Replies: 1 comment
The serialization mechanisms would need to be changed to use Protocol Buffers instead of the JSON serialization used currently. You should be able to inherit from the base classes and implement your own serialization. If I implemented Protocol Buffers-style serialization, I would need to do it in a way that doesn't break the existing JSON serialization. I'll keep this in mind. Thank you for the suggestion and feedback.

Update: I also asked GitHub Copilot to take a look at implementing this. You can see the PR here: #86
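The subclass-and-override approach could be sketched roughly as below. This is a hedged Python illustration, not SharpVector's actual C# API: the class names (`MemoryVectorDatabase`, `BinaryVectorDatabase`) and the `serialize`/`deserialize` hooks are hypothetical stand-ins, and `pickle` stands in for a real Protocol Buffers message, which would require a compiled schema.

```python
import json
import pickle


class MemoryVectorDatabase:
    """Hypothetical stand-in for a vector index base class; the real
    library's API will differ. The default path serializes to JSON."""

    def __init__(self):
        self.records = {}  # record id -> (vector, text)

    def add(self, rec_id, vector, text):
        self.records[rec_id] = (list(vector), text)

    def serialize(self) -> bytes:
        # Default path: JSON, human-readable but slow for large indexes.
        return json.dumps(self.records).encode("utf-8")

    def deserialize(self, data: bytes):
        raw = json.loads(data.decode("utf-8"))
        # JSON turns tuples into lists; restore the original shape.
        self.records = {k: (v[0], v[1]) for k, v in raw.items()}


class BinaryVectorDatabase(MemoryVectorDatabase):
    """Subclass that overrides only the serialization hooks, leaving the
    JSON path in the base class untouched. pickle is used here purely as
    a stand-in for a real Protocol Buffers encoding."""

    def serialize(self) -> bytes:
        return pickle.dumps(self.records, protocol=pickle.HIGHEST_PROTOCOL)

    def deserialize(self, data: bytes):
        self.records = pickle.loads(data)
```

The important property is that the binary override lives entirely in the subclass, so existing consumers of the JSON format are unaffected.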
I see we have a new method to serialize/deserialize to a binary stream, but it's still the same JSON call under the hood. It's taking 3-4 seconds to load an index of around 4,000 records for me at the moment. Not terrible, but problematic.

I was wondering if we could bypass the library's built-in serialization and use Protocol Buffers to serialize the instance of the index/SharpVector in memory. That way, I could quickly switch between databases without the cost of the current cold start. Is this something anyone has tried? I pulled a fork of the code and gave it to a bot to pull apart to see if there would be any issues doing it, and it said it wouldn't work because not everything is designed to be serialized like this, but I'm not sure if that means everything needs to be serializable, or just the important parts.
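To make the cold-start concern concrete, a small benchmark can compare parsing a JSON blob against loading a binary one for an index of this size. This is a Python sketch, not the .NET code path: the record shape (4,000 entries, 384-dimensional vectors) is an assumption mirroring the numbers above, and `pickle` stands in for a real Protocol Buffers message. Absolute timings will differ from the .NET serializer's, but the shape of the comparison is the point.

```python
import json
import pickle
import random
import time

# Hypothetical index: ~4,000 records, each a 384-dim vector plus text.
records = {
    str(i): ([random.random() for _ in range(384)], f"document {i}")
    for i in range(4000)
}

json_blob = json.dumps(records).encode("utf-8")
bin_blob = pickle.dumps(records, protocol=pickle.HIGHEST_PROTOCOL)

t0 = time.perf_counter()
json.loads(json_blob)
json_load = time.perf_counter() - t0

t0 = time.perf_counter()
pickle.loads(bin_blob)
bin_load = time.perf_counter() - t0

# JSON must re-parse every float from text on load; the binary format
# reads them back directly, which is where the cold-start saving comes from.
print(f"JSON load:   {json_load * 1000:.1f} ms")
print(f"binary load: {bin_load * 1000:.1f} ms")
```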