Compress Database Location Files #954
Conversation
Adds two NPM scripts to handle compressing and decompressing the locations data in a reproducible way. Needs to be tested on Windows.
Does not remove them from history.
The compressed file now resides in a folder structure that mimics the database, just in case other files are added in the future.
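For context, here is a rough sketch of what the compression side of those scripts could look like, assuming the two dev dependencies are `node-7z` and `7zip-bin` (the script location, paths, and the `.DS_Store` exclusion below are illustrative, not taken from the actual project):

```ts
// Hypothetical compression script (e.g. scripts/databaseCompress.ts).
// Assumes node-7z + 7zip-bin; paths and archive name are illustrative.
import * as path from "node:path";
import Seven from "node-7z";
import { path7za } from "7zip-bin";

const locationsDir = path.resolve("project/assets/database/locations"); // illustrative path
const archivePath = path.resolve("project/assets/compressed/database/locations.7z"); // illustrative path

// Compress the location JSON files into a single 7-zip archive.
const stream = Seven.add(archivePath, path.join(locationsDir, "*"), {
    $bin: path7za,
    recursive: true,
    // Keep hidden metadata files (e.g. .DS_Store) out of the archive.
    $raw: ["-xr!.DS_Store"],
});

stream.on("end", () => console.log(`Compressed locations to ${archivePath}`));
stream.on("error", (err) => {
    console.error("Compression failed", err);
    process.exit(1);
});
```

The decompression script would be the mirror image, calling `Seven.extractFull` instead of `Seven.add`.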
This class is responsible for decompressing database archives. It's initialized on server start, but the decompression mechanism only fires when the target database directory is empty. Adds roughly 1.5 seconds to initial server start (in my limited testing). The gulp build script has been updated to ignore the location database files. Needs to be tested.
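To make that behaviour concrete, a minimal sketch of the shape such a class could take (the method names, paths, and node-7z/7zip-bin usage are assumptions; only the empty-directory check and the run-on-start behaviour come from the description above):

```ts
import * as fs from "node:fs";
import * as path from "node:path";
import Seven from "node-7z";
import { path7za } from "7zip-bin";

export class DatabaseDecompressionUtil {
    // Illustrative locations; the real archive/target paths may differ.
    private archivePath = path.resolve("project/assets/compressed/database/locations.7z");
    private targetDir = path.resolve("project/assets/database/locations");

    // Called on server start; only decompresses when the target directory
    // is missing or empty, so a populated working copy is left untouched.
    public async initialize(): Promise<void> {
        if (this.targetDirectoryHasFiles()) {
            return;
        }

        await new Promise<void>((resolve, reject) => {
            const stream = Seven.extractFull(this.archivePath, this.targetDir, { $bin: path7za });
            stream.on("end", () => resolve());
            stream.on("error", reject);
        });
    }

    private targetDirectoryHasFiles(): boolean {
        return fs.existsSync(this.targetDir) && fs.readdirSync(this.targetDir).length > 0;
    }
}
```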
God damn it, Steve.
Prevents certain hidden metadata files from getting caught up in the archive.
This gulp task runs before any of the database assets are copied over. It ensures that all database archives are decompressed into their respective directories. This means that the archive has priority when building. All files within the target database directories will be removed and replaced with the contents of the database archives. Uses the same two dev dependencies as the NPM 'database' scripts.
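A rough sketch of what that gulp wiring could look like, again assuming node-7z and 7zip-bin (the task name, paths, and cleanup approach are illustrative; only the ordering, clean and decompress before the database assets are copied, reflects the description above):

```ts
// gulpfile excerpt (illustrative): decompress archives before copying database assets.
import * as fs from "node:fs";
import * as path from "node:path";
import gulp from "gulp";
import Seven from "node-7z";
import { path7za } from "7zip-bin";

const archivePath = path.resolve("project/assets/compressed/database/locations.7z"); // illustrative
const targetDir = path.resolve("project/assets/database/locations"); // illustrative

// Remove whatever is in the target directory so the archive contents take priority.
async function cleanDatabaseDirectories(): Promise<void> {
    await fs.promises.rm(targetDir, { recursive: true, force: true });
    await fs.promises.mkdir(targetDir, { recursive: true });
}

// Extract the archive into its target directory.
function decompressDatabaseArchives(): Promise<void> {
    return new Promise((resolve, reject) => {
        const stream = Seven.extractFull(archivePath, targetDir, { $bin: path7za });
        stream.on("end", () => resolve());
        stream.on("error", reject);
    });
}

// Runs before any database assets are copied into the build output.
gulp.task("decompress-database-archives", gulp.series(cleanDatabaseDirectories, decompressDatabaseArchives));
```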
I'm on the fence about reintroducing the `DatabaseDecompressionUtil` class. It could be modified to only run in a non-compiled environment, so that developers would not have to run the npm command after initially cloning the project. However, they would still have to remember to run the command manually after location data has been updated. Due to the 7-zip executable, it cannot be run after compilation. I'm open to suggestions. Maybe this is good enough?
Changes look good.
I could see it being useful to have the 7z auto-extract on run in the IDE, to avoid potentially forgetting to extract updated DB files, but I wouldn't consider it a blocker to implementing this.
Reintroduces the `DatabaseDecompressionUtil` class. This baby will automatically decompress database archives if their target directory is empty or does not exist. It will only run in a non-compiled environment, so only developers (and 31337 linux h4x0rs) will be able to utilize it.
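As a sketch of how that guard could be wired into server start (the import path and the packaged-build check via `process.pkg` are assumptions for illustration, not the project's actual detection mechanism):

```ts
// Hypothetical startup wiring.
import { DatabaseDecompressionUtil } from "./utils/DatabaseDecompressionUtil";

// Assumed detection of a compiled/packaged build; the real check may differ.
const isCompiled = typeof (process as unknown as { pkg?: unknown }).pkg !== "undefined";

if (!isCompiled) {
    // Only developers running from source get automatic decompression on start.
    void new DatabaseDecompressionUtil().initialize();
}
```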
We've created our own Git Large File Storage (LFS) server due to the excessive cost of GitHub's bandwidth. Did you know all of their Ethernet cables are actually gold-coated diamonds?

This PR reverses the work done in #954 to compress large location JSON files into a 7-zip archive and handle the (de)compression of the archive. Only JSON files within the `project/assets/database/` directory that are larger than 5MB have been included in LFS. This translates to all of the `looseLoot.json` files. The rest are small enough to be included in the base repo.

A `.lfsconfig` file has been added to the root of the project to alert git to the presence of the custom LFS server. This public server is read-only. Write access is only available to developers within the Single Player Tarkov GitHub organization.

<img src="https://github.com/user-attachments/assets/7ddfec9b-5a9a-42e6-806d-fd419e4eaa4f" width="250">
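For illustration, a `.lfsconfig` that points git at a custom LFS endpoint looks roughly like this; the URL below is a placeholder rather than the actual server address:

```
# .lfsconfig at the repository root (placeholder URL)
[lfs]
    url = https://lfs.example.com/single-player-tarkov/server
```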
Due to LFS storage issues... This PR removes all current LFS files (the location loot files) and replaces them with a single 7-zip archive. The archive is still stored in LFS, but the stored size has decreased by roughly 95%.
The location `.json` files are now git-ignored. There are two new npm commands to aid in working with the archive:

- `npm run database:compress`: Compresses the JSON files into an archive which can be committed into the project.
- `npm run database:decompress`: Decompresses the archive into the original JSON files located in the working directory.
The gulp file that handles builds has been updated to ensure that the archive files are always used when a build is processed, regardless of whether the JSON files are already present in the working directory.