You run a query. You wait patiently for the results to return. You want the results in Excel. You ask SQL Developer to ‘Export.’ SQL Developer seems to start over by running the query again. You are no longer waiting patiently.
How can you avoid this?
The solution is to ensure all of the data has been fetched into the grid before you ask for the Export.
What the heck is he talking about, fetched?
Running a query in Oracle has several distinct steps.
You submit your query.
The database engine parses the query and identifies an execution plan. If it doesn’t already have one cached, it builds a new one. This should take only a millisecond or two.
The database runs your query. It may spawn one or more processes to get the data and do all the sorts, merges, etc. You don’t really care about how it gets your data, but it’s all there to see in the plan.
The database tells the requestor that their query has executed.
You ask for the data.
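The execute-then-fetch split in the steps above shows up in any database API, not just SQL Developer. Here is a minimal sketch using Python’s standard-library `sqlite3` module as a stand-in for Oracle (the DB-API execute/fetch pattern is the same; the table and row counts are invented for illustration):

```python
import sqlite3

# An in-memory database standing in for Oracle; the execute/fetch
# split works the same way in any DB-API driver.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER)")
cur.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(250)])

# Steps 1-3: submit the query; the engine parses, plans, and executes.
cur.execute("SELECT id FROM t ORDER BY id")

# Step 4: no rows have crossed the wire yet -- they come back only
# as we fetch them, one batch at a time.
first_batch = cur.fetchmany(100)  # analogous to the initial 100-row fetch
print(len(first_batch))           # 100
```

The point of the sketch: `execute()` returning is not the same as having your data. The 150 remaining rows are still sitting on the server side until something asks for them.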
Asking for the data is usually an assumed part of running the query. What good is running a query without seeing the results? However, this last phase is what is known as a ‘fetch.’ Records are retrieved in batches, and the size of each batch depends on your ‘SQL Array Fetch Size’ preference. The default is 100.
This means that, by default, SQL Developer fetches only the first 100 rows of your query. You can read up on the JDBC nitty-gritty if you’re morbidly curious.
What you need to know is that many of your queries return MORE than what is shown on the initial fetch.
So how do you get the rest of the records?
Click in the grid so it has focus, then press Ctrl+End.
This will force SQL Developer to fetch the rest of the records into the grid.
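In code, ‘getting the rest of the records’ is nothing magic: it is just repeated fetch calls until the cursor runs dry, which is roughly what SQL Developer does behind the scenes when you force a full fetch. A sketch, again using stdlib `sqlite3` as a hypothetical stand-in for an Oracle connection:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER)")
cur.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(250)])

cur.execute("SELECT id FROM t")
rows, batches = [], 0
while True:
    batch = cur.fetchmany(100)  # each call is one more fetch round-trip
    if not batch:               # empty batch means the cursor is exhausted
        break
    rows.extend(batch)
    batches += 1

print(len(rows), batches)       # 250 rows arrive in 3 batches: 100+100+50
```

With a 100-row batch size, 250 rows cost three round-trips, which is why a large result set takes noticeably longer to pull fully into the grid than its first page did.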
Now, with all of the records in the grid, Export will use that data instead of re-querying the database.
Clear as mud?
But WHY does SQL Developer force a second query execution?
If you do force all of the rows back into the grid, you are going to consume a decent amount of memory. Some query result sets are larger than others, and not all of us have 64-bit monsters to run our tools on. So, to conserve machine resources and memory, SQL Developer simply runs the query again and writes the data directly to the destination, bypassing the data grid.
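The trade-off described here is streaming versus buffering. A streaming export writes each fetched batch straight to the destination and never holds the full result set in memory, which is why re-running the query can be the cheaper option. A sketch of that pattern, using stdlib `sqlite3`, `csv`, and an in-memory `StringIO` as a hypothetical export destination:

```python
import csv
import io
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER)")
cur.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])

# Streaming export: re-run the query and write each batch to the
# destination as it arrives, instead of buffering everything in a grid.
out = io.StringIO()  # stand-in for the export file
writer = csv.writer(out)
cur.execute("SELECT id FROM t")
while True:
    batch = cur.fetchmany(100)  # one fetch round-trip per batch
    if not batch:
        break
    writer.writerows(batch)     # rows leave memory as soon as they're written

print(out.getvalue().count("\n"))  # 1000 lines written
```

Peak memory here is one 100-row batch, not 1,000 rows; that is the saving SQL Developer banks by bypassing the data grid on export.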