This is an important post for any PostgreSQL Database Administrator who deals with bulk insertion in their PostgreSQL Server.
We should optimize bulk insertion because it consumes significant system resources.
A few steps to increase the performance of bulk insertion:
- If it is a one-time data load, drop all indexes and recreate them after the INSERT completes, because maintaining indexes row by row during the load is expensive. For recurring bulk insertion, review the indexes and keep only the ones you really need (see the index sketch after this list).
- Take a backup of all the trigger definitions and drop or disable the triggers for the duration of the load (see the trigger sketch below).
- Use the COPY command instead of INSERT; it is the fastest built-in way to load data (see the COPY sketch below).
- If you cannot use COPY and must use multi-row INSERT, do not put more than about 1000 rows in a single INSERT statement. All the values of a multi-row INSERT are held in memory at once, which can decrease INSERT performance (see the batching sketch below).
- Disable other database logging, such as statement logging, while the load runs (see the logging sketch below).
- If the bulk insertion runs in a single transaction, you should also disable all FOREIGN KEY constraints and re-enable them afterwards (see the foreign-key sketch below).
- Set synchronous_commit = off to reduce the cost of committing (see the synchronous_commit sketch below).
- Change the default checkpoint configuration: increase checkpoint_segments (replaced by max_wal_size in PostgreSQL 9.5) so checkpoints happen less often during the load.
- Increase the sizes of shared_buffers, effective_cache_size, and work_mem; maintenance_work_mem also helps when the indexes are recreated after the load (see the configuration sketch below for both of these steps).
- Prepare statements on the server side so repeated INSERTs are parsed and planned only once (see the PREPARE sketch below).
- Disable autovacuum on the target table (see the autovacuum sketch below).
- Disable any automatic database backup process while the load runs.
- Increase system resources (memory, disk, CPU) if required.
- Use SSD storage for faster disk I/O.
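A minimal sketch of the index step, assuming a hypothetical target table sales_history with a hypothetical index idx_sales_history_customer:

```sql
-- Drop the index before a one-time load; otherwise every inserted
-- row would also have to update the index.
DROP INDEX IF EXISTS idx_sales_history_customer;

-- ... perform the bulk load here ...

-- Recreate the index once, after all rows are in place.
CREATE INDEX idx_sales_history_customer
    ON sales_history (customer_id);
```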
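For the trigger step, you can script the definitions with pg_dump before dropping them, or simply disable the triggers so the definitions stay in place. A sketch using the same hypothetical table:

```sql
-- Keep a copy of the definitions first (from the shell):
--   pg_dump --schema-only --table=sales_history mydb > sales_history_schema.sql

-- Disable all user triggers on the table, load, then re-enable.
ALTER TABLE sales_history DISABLE TRIGGER USER;
-- ... perform the bulk load here ...
ALTER TABLE sales_history ENABLE TRIGGER USER;
```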
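A sketch of the COPY step; the file path and column list are illustrative:

```sql
-- Server-side COPY: the file must be readable by the PostgreSQL
-- server process on the database host.
COPY sales_history (customer_id, sale_date, amount)
FROM '/var/lib/postgresql/import/sales_history.csv'
WITH (FORMAT csv, HEADER true);

-- From psql, \copy streams the file from the client machine instead:
-- \copy sales_history (customer_id, sale_date, amount) FROM 'sales_history.csv' WITH (FORMAT csv, HEADER true)
```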
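Where COPY is not an option, a sketch of batched multi-row INSERTs; the 1000-row ceiling is a rule of thumb, not a hard limit:

```sql
BEGIN;

-- Batch 1: keep each statement to roughly 1000 rows or fewer.
INSERT INTO sales_history (customer_id, sale_date, amount) VALUES
    (101, '2016-01-01', 250.00),
    (102, '2016-01-01', 120.50),
    -- ... more rows, up to the batch limit ...
    (103, '2016-01-02', 75.25);

-- Batch 2, and so on.
INSERT INTO sales_history (customer_id, sale_date, amount) VALUES
    (104, '2016-01-02', 300.00),
    (105, '2016-01-03', 99.99);

COMMIT;
```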
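For the logging step, a sketch that turns off statement logging for the whole server during the load; it assumes PostgreSQL 9.4 or later (for ALTER SYSTEM) and superuser rights:

```sql
-- Stop logging statements and durations while the load runs.
ALTER SYSTEM SET log_statement = 'none';
ALTER SYSTEM SET log_min_duration_statement = -1;
SELECT pg_reload_conf();

-- Afterwards, restore the previous behaviour:
-- ALTER SYSTEM RESET log_statement;
-- ALTER SYSTEM RESET log_min_duration_statement;
-- SELECT pg_reload_conf();
```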
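A sketch of the foreign-key step, with hypothetical constraint, column, and table names; re-adding the constraint validates all rows in one pass instead of row by row:

```sql
-- Drop the FK so each inserted row skips the reference check.
ALTER TABLE sales_history DROP CONSTRAINT fk_sales_history_customer;

-- ... perform the bulk load here ...

-- Re-add the FK after the load.
ALTER TABLE sales_history
    ADD CONSTRAINT fk_sales_history_customer
    FOREIGN KEY (customer_id) REFERENCES customers (customer_id);
```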
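The synchronous_commit change can be limited to the loading session, which leaves other clients untouched; a sketch:

```sql
-- Session-level only: on a crash you may lose the most recent
-- commits, but the database itself cannot become corrupted.
SET synchronous_commit = off;

-- ... perform the bulk load here ...
```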
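A configuration sketch for the checkpoint and memory steps in postgresql.conf; the values are illustrative and depend on your RAM and workload:

```
# postgresql.conf -- illustrative values only
checkpoint_segments = 64      # pre-9.5; on 9.5+ use max_wal_size = 4GB instead
shared_buffers = 4GB          # commonly around 25% of RAM
effective_cache_size = 12GB   # roughly the OS cache you expect to have
work_mem = 64MB               # per sort/hash operation
maintenance_work_mem = 1GB    # speeds up the post-load index rebuild
```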
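A sketch of the server-side prepared-statement step, so the INSERT is parsed and planned once and then reused:

```sql
PREPARE insert_sale (int, date, numeric) AS
    INSERT INTO sales_history (customer_id, sale_date, amount)
    VALUES ($1, $2, $3);

EXECUTE insert_sale (101, '2016-01-01', 250.00);
EXECUTE insert_sale (102, '2016-01-01', 120.50);

-- Release the prepared statement when the load is done.
DEALLOCATE insert_sale;
```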
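A sketch of disabling autovacuum for the target table only; remember to re-enable it, as the closing note says:

```sql
-- Stop autovacuum from competing with the load on this table.
ALTER TABLE sales_history SET (autovacuum_enabled = false);

-- ... perform the bulk load here ...

-- Re-enable autovacuum, then vacuum/analyze manually once.
ALTER TABLE sales_history SET (autovacuum_enabled = true);
VACUUM ANALYZE sales_history;
```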
Note: Whatever you changed for the bulk insertion, please do not forget to roll it back after the bulk insertion completes.