There is a use case where roughly 10,000 records are inserted every few seconds after some processing.
It was suggested that dropping the indexes before the insert would improve insert speed.
The Postgres docs recommend removing indexes and foreign keys before a bulk load: creating an index on pre-existing data is quicker than updating it incrementally as each row is loaded.
What is the real impact in this specific case?
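As a sketch of the workflow being tested (the index name is the one from this case; the table, column, and constraint names are hypothetical and would need adapting to your schema):

```sql
-- Hypothetical table/column/constraint names; adapt to your schema.
-- 1. Drop the foreign key and index before the bulk insert.
--    Note: DROP INDEX CONCURRENTLY cannot run inside a transaction block.
ALTER TABLE catalogue DROP CONSTRAINT fk_catalogue_store;
DROP INDEX CONCURRENTLY ix_multi_catalogue_version_store_code;

-- 2. Run the bulk insert (about 10,000 rows per batch).
-- INSERT INTO catalogue (...) VALUES ...;

-- 3. Recreate them afterwards; building the index over the existing
--    data in one pass is cheaper than maintaining it row by row.
CREATE INDEX CONCURRENTLY ix_multi_catalogue_version_store_code
    ON catalogue (version, store_code);
ALTER TABLE catalogue ADD CONSTRAINT fk_catalogue_store
    FOREIGN KEY (store_code) REFERENCES store (code);
```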
Indexes Active
The message from the process:
Build execution time: 372.59 s
Build execution time: 261.93 s
Build execution time: 300.37 s
Insert time:
Build complete for: [9830 record]
Performance build: Query execution and processing time 3.271 s
Performance build: Catalogue Insert time 5.519 s
Performance build: COMPLETE total time 8.790 s
Mean execution time of the insert from pg_stat_statements: 209.53 ms
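The mean execution time figures presumably come from the pg_stat_statements extension; a query along these lines would report them (column names as in PostgreSQL 13+, where the extension must be installed and loaded via shared_preload_libraries):

```sql
-- Requires the pg_stat_statements extension.
SELECT query, calls, round(mean_exec_time::numeric, 2) AS mean_ms
FROM pg_stat_statements
WHERE query ILIKE 'insert%'
ORDER BY mean_exec_time DESC;
```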
Indexes removed
Dropped the FK (table and constraint names are placeholders here):
ALTER TABLE <table_name> DROP CONSTRAINT <fk_constraint>;
Dropped the index:
DROP INDEX CONCURRENTLY ix_multi_catalogue_version_store_code;
Reset stats:
SELECT pg_stat_statements_reset();
The message from the process:
Catalogue build execution time: 238.12 s
Insert time:
Build complete for: [9830 normal products]
Performance build: Query execution and processing time 3.288 s
Performance build: Catalogue Insert time 5.068 s
Performance build: COMPLETE total time 8.356 s
Mean execution time of the insert from pg_stat_statements: 118.53 ms
Conclusion
Does it speed up performance? Yes.
The insert time is reduced once the foreign key and index have been removed.
At the database level the records insert about 76% faster (mean 209.53 ms down to 118.53 ms).
However, the process sending the records to the database is still bound by network and CPU, so its insert step speeds up by only about 8.9%.
The whole process was made about 26% faster.
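The two insert-level percentages above can be reproduced directly from the measured means; the 76% figure expresses the old mean as a multiple of the new one:

```sql
SELECT round(100 * (209.53 - 118.53) / 118.53, 1) AS db_insert_speedup_pct,   -- 76.8
       round(100 * (5.519 - 5.068) / 5.068, 1)    AS process_insert_speedup_pct;  -- 8.9
```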
Is the added effort of removing the indexes beforehand and recreating them afterwards enough to warrant this speed increase?
Perhaps that is one for you to decide; it is also important to test against your own workload.