Opened on 08/02/2018 at 01:59:48 PM
Closed on 03/17/2019 at 08:53:22 AM
#6831 closed change (rejected)
Deduplicate domains sources for content filters
Reported by: | mjethani | Assignee: | |
---|---|---|---|
Priority: | P2 | Milestone: | |
Module: | Core | Keywords: | |
Cc: | kzar, sergz, hfiguiere | Blocked By: | #6834 |
Blocking: | #6729 | Platform: | Unknown / Cross platform |
Ready: | no | Confidential: | no |
Tester: | Unknown | Verified working: | no |
Review URL(s): |
Description (last modified by mjethani)
Background
With #6815 we started deduplicating the Map objects created out of the domain sources for filters. The next step is to deduplicate the domain source itself, which is a "sliced string" in V8 (as explained in the Slicing section in #6729) that holds on to the entire filter text. We could slice out the rest of the filter text and set the value of domainSource to the corresponding key in the knownDomainMaps object. In the text getter of ContentFilter we could then reconstruct the filter text by combining the value of domainSource with the sliced-out part of the filter text. This would free up at least the memory currently occupied by the domain part of the filter text.
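The idea could be sketched roughly as follows. This is not the actual adblockpluscore implementation; the parsing is simplified (only the "##" element-hiding separator, no "~" exclusions), and the shape of the knownDomainMaps entries is an assumption made here so that the canonical key string can be returned cheaply:

```javascript
// Shared table keyed by the domain-source string, as introduced in
// #6815. Each entry keeps the first (canonical) copy of the string
// alongside the parsed Map, so later sliced-string copies can be
// dropped and garbage-collected. (Hypothetical structure.)
const knownDomainMaps = new Map();

function parseDomains(domainSource)
{
  // Simplified: the real syntax also supports "~" exclusions.
  return new Map(domainSource.split(",").map(domain => [domain, true]));
}

function dedupDomainSource(domainSource)
{
  let entry = knownDomainMaps.get(domainSource);
  if (!entry)
  {
    entry = {source: domainSource, domains: parseDomains(domainSource)};
    knownDomainMaps.set(domainSource, entry);
  }
  // Return the canonical copy instead of the caller's slice.
  return entry.source;
}

class ContentFilter
{
  constructor(text)
  {
    // e.g. "example.com,example.net##.ad"
    let separatorIndex = text.indexOf("##");
    this.domainSource = dedupDomainSource(text.substring(0, separatorIndex));
    this.bodySource = text.substring(separatorIndex);
  }

  // Reconstruct the filter text on demand rather than keeping a
  // reference to the original string.
  get text()
  {
    return this.domainSource + this.bodySource;
  }
}
```

Note that this only pays off once nothing else (such as Filter.knownFilters) retains the original filter text, which is what #6834 is about.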
This would be trickier to do for blocking filters, but it can be done easily for content filters because of their simpler syntax.
In order for this to have any effect on memory usage, though, the Filter.knownFilters object will first have to give up its references to the filter text and start indexing filters by a hash of the text instead (see #6834).
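To illustrate the prerequisite, here is a minimal sketch of what hash-based indexing could look like. The hash function (FNV-1a here) and the names hashFilterText and getOrCreateFilter are assumptions for illustration, not the scheme chosen in #6834, and a real implementation would have to detect and handle hash collisions:

```javascript
// Hash the filter text so the index itself holds a number, not a
// reference that keeps the text string alive. FNV-1a, 32-bit.
function hashFilterText(text)
{
  let hash = 0x811c9dc5;
  for (let i = 0; i < text.length; i++)
  {
    hash ^= text.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193);
  }
  return hash >>> 0;
}

// hash -> filter object. Collision handling is omitted for brevity;
// a real implementation cannot ignore it.
const knownFilters = new Map();

function getOrCreateFilter(text, create)
{
  let key = hashFilterText(text);
  let filter = knownFilters.get(key);
  if (!filter)
  {
    filter = create(text);
    knownFilters.set(key, filter);
  }
  return filter;
}
```

With an index like this, the filter text is reachable only through the filter objects themselves, so reconstructing it in a getter (as described above) actually releases memory.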
See #6729 for more background.
What to change
To be determined.
I'm closing this because we are not going to do this. It's not worth trying to reduce the amount of text in memory; it's better to try to reduce the number of objects instead.