add a flag to control the hash checking #77


Open · wants to merge 1 commit into base: 2.0

Conversation

@Trial97 commented May 4, 2025

This addresses a compatibility issue with zlib-ng-compat, which is now shipped in place of zlib in various Linux distros. The change introduces an option to disable the hash-mismatch errors reported by processors.
It doesn't need to be visible in the project, as I plan to integrate it with ForgeWrapper.

@Jonathing (Member)

I would prefer if this were an additional constructor parameter and/or command-line argument. The way you wrote this, it looks like you're going to modify this field with reflection, which is fine, but if there's no other way to access it then it defeats the purpose.

@Trial97 (Author) commented May 4, 2025

Sure thing, I updated the PR to make it a command-line argument.

@Jonathing (Member)

@LexManos

@LexManos (Member) commented May 4, 2025

The better option, since we have to rebuild the installers anyways, is to rebuild them to just specify the correct compression level. It's already been fixed in modern versions; it's just older shit that's the issue.

@Trial97 (Author) commented May 4, 2025

What is the correct compression level?
And is it enough to set it here: https://github.com/MinecraftForge/Installer/blob/2.0/src/main/java/net/minecraftforge/installer/actions/OfflineAction.java#L80

Or does it need to be changed in multiple places?

Also, it would be nice to still have the option to skip the hash checks even after the underlying issue is fixed, but as long as it is fixed, I won't complain.

@LexManos (Member) commented May 4, 2025

MinecraftForge/JarSplitter#2
MinecraftForge/ForgeAutoRenamingTool@f298a56
It's a matter of 'fixing' the things that have hash issues.

@Trial97 (Author) commented May 7, 2025

Continuing from: #78 (comment)
If the plan is to regenerate the old versions, can you merge this PR so the flag is also included?
My reasoning is that I want to ensure that in the future, if there are similar hash issues, there is a way to control it.

@LexManos (Member) commented May 7, 2025

Honestly I don't want to include this because knowing when things are going wrong is important. This whole issue is that a somewhat common Linux library has a behavioral bug in it so that it doesn't respect standard defaults.

Adding this flag would just cause people to not report issues.
Currently there is no "plan" to address this beyond telling people to fix their zlib install.

IF/when someone wants to work on making an installer updater pass that fixes this properly then I may consider that side. But regarding this flag, it is not needed as the update will fix the issue.

We also need a way to actually validate that this is an issue on our end, as the assumption is that it exists, but we've never been able to reproduce it. If someone can produce something like a docker container or virtual machine image, I would appreciate it.

@Trial97 (Author) commented May 7, 2025

I'm not sure I would be able to regenerate the old version myself as that is a bit too much for me.

Not sure exactly where the issue is or why it is happening, but I had people complaining about this: PrismLauncher/PrismLauncher#3406 and https://www.reddit.com/r/PrismLauncher/comments/1ifpbgp/issue_with_modpacks/ . And from the change you linked previously, it should have already been fixed a long time ago, but this was reported this year.

If this is not desired as a public flag, can it at least be accepted as a private one, as I proposed originally?
The proposed change: https://github.com/MinecraftForge/Installer/compare/f06ce3e5b9f84fc3dd6c96f3469ff29b58c2430a

My use case is just wanting to add a way for ForgeWrapper (by modifying the field with reflection) to bypass this check without needing to reimplement the logic.

From my knowledge, this setting is already present in the FTB App, but I would want to avoid reimplementing the install process.
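For reference, the reflection approach described above would look roughly like the following sketch. `HashChecker` and `skipHashCheck` are hypothetical stand-ins defined inside the example, not the installer's actual class or field names:

```java
import java.lang.reflect.Field;

public class ReflectionToggle {
    // Hypothetical stand-in for the installer class holding the flag.
    static class HashChecker {
        static boolean skipHashCheck = false;
    }

    public static void main(String[] args) throws Exception {
        // A wrapper like ForgeWrapper could flip the flag without any public
        // API, at the cost of breaking silently if the field is ever renamed.
        Field f = HashChecker.class.getDeclaredField("skipHashCheck");
        f.setAccessible(true);
        f.setBoolean(null, true); // null receiver: the field is static
        System.out.println(HashChecker.skipHashCheck); // prints "true"
    }
}
```

The fragility Jonathing points out is visible here: nothing ties the string `"skipHashCheck"` to the real field, which is why a constructor parameter or command-line argument is the more robust interface.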

@LexManos (Member) commented May 15, 2025

Okay an update on this.

Regarding this PR:
It is not complete; it only disables the one check for newly added files. It doesn't bypass the check for existing files, which will result in file churn when installing multiple times, such as this guy's use case of running the installer every launch.

Regarding the actual issue:
It's being looked into. Basically zlib-ng is explicitly designed to not produce the same output when compressing files as zlib. So the system that has worked for a decade, where we expect binary-identical compression, simply does not work when people use zlib-ng.

Okay, so I have a few options, all of which have the con of needing to rebuild every installer.

  • Simplest: Strip all hashes from the json files

    • Pro: Simplest, just deleting data
    • Pro: No change to installer format so downstream projects don't need to care.
    • Con: Makes installers re-run every task every install
    • Con: Doesn't detect file corruption which is the main point of the check
    • Con: Not sure if this actually works in the official launcher, need to verify that it allows for empty hashes in the version.json
  • Store: Disable compression on all generated jar files.

    • Pro: Bypasses this issue forever
    • Pro: Same hash on any compression library.
    • Pro: Faster runtime as java doesn't need to decompress Minecraft itself
    • Pro: No change to installer format so downstream projects don't need to care.
    • Con: Increase in install size; however, honestly not sure if that's a big deal.
      We already 'waste' space by having things we don't strictly use, but are useful intermediate steps.
      Such as the unmodified dedicated server jar.
      Not deleting install time only libraries, etc..
      We could add a 'delete install temp files' post processor, which would save a bit of space.
      I want to do this anyways, so I'll be looking into the feasibility when I can.
      • On 1.19.4 it is:
        • Client
          • Extra: 10.5 -> 16.3
          • Slim: 11.8 -> 23.0
          • SRG: 17.7 -> 41.7
          • Patched: 5.3 -> 12.4
          • Process Total: 39.3 -> 93.4 = 54.1
          • Full Install: 102 -> 156
        • Server
          • Extra: 4.9 -> 9.3
          • Slim: 8.9 -> 17.7
          • SRG: 13.2 -> 32.0
          • Patched: 5.3 -> 12.3
          • Process Total: 32.3 -> 71.3 = 39
          • Full Install: 157 -> 196
  • Deep Check: Keep compression, but convert hash check to verify contents of the jar files not the file itself

    • Pro: Keeps integrity checks in the installer
    • Pro: Keeps file sizes down
    • Con: Requires version spec change, and downstream projects to put effort in. (AKA major fucking headaches)
    • Con: Not sure if this actually works in the official launcher, need to verify that it allows for empty hashes in the version.json
  • Hack: Add flag to disable hash checks (this PR)

    • Pro: Simple, doesn't fix the issue but bypasses it.
    • Con: Requires user interaction to enable the flag if they run into issues. Perhaps add a 'ran into compression differences, it may work, but okay to ignore' message if the GUI is used?
  • Duplex: Add zlib-ng specific hashes and succeed if either hash matches

    • Same pros and cons as Deep
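To illustrate the 'Store' option: writing jar entries with `ZipEntry.STORED` takes zlib out of the equation entirely, since nothing is deflated. A minimal sketch (the entry name and payload are made up); note that STORED entries require the size and CRC-32 to be set before the entry is written:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class StoredJarExample {
    // Write a single uncompressed (STORED) entry and return the archive bytes.
    public static byte[] writeStored(String name, byte[] data) throws IOException {
        CRC32 crc = new CRC32();
        crc.update(data);

        ZipEntry entry = new ZipEntry(name);
        entry.setMethod(ZipEntry.STORED); // no deflate, so no zlib involvement
        entry.setSize(data.length);       // STORED requires the size...
        entry.setCrc(crc.getValue());     // ...and the CRC-32 up front
        entry.setTime(0L);                // fixed timestamp for reproducible bytes

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (ZipOutputStream zip = new ZipOutputStream(out)) {
            zip.putNextEntry(entry);
            zip.write(data);
            zip.closeEntry();
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] jar = writeStored("data.txt",
                "same bytes on every zlib".getBytes(StandardCharsets.UTF_8));
        // The entry bytes appear verbatim in the archive, so the file hash no
        // longer depends on which zlib implementation the JVM linked against.
        System.out.println(jar.length);
    }
}
```

This is why Store trades disk space for stable hashes: the archive bytes are fully determined by the entry contents and metadata, with no compressor in the loop.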

I have been able to reproduce the issue, FINALLY, by creating a docker container.
I am currently in the process of writing proper tests for the old installers to validate any fixes we have.
Once that is done, I'll decide which fix to pursue.

@LexManos (Member)

This is on the back burner as it's not a super critical error, for already stated reasons.
Wanted to give an update. I've gone down the deep rabbit hole that is zlib integration with the JRE.
When a distributor decides to build OpenJDK, they have a choice to statically or dynamically link zlib. A lot of modern OpenJDK variants simply statically link it, which is why I was having so much trouble getting a reproduction case even with zlib-ng installed at the system level: most distros on foojay use static linking.

But once I was able to get a dynamically linked distro on Ubuntu, I was able to reliably reproduce the issue.

Some fun things:
A fairy tale of the zlib/zip compression in OpenJDK
simonis/zlib-bench

The basic takeaway from that is that zlib is very fragmented in modern development. Tons of variants, some that follow spec, some that don't. Some that are better at one aspect, but horrible at others. For our purposes though, any solution that mitigates zlib-ng's issues yet still keeps compression will just cause issues with other implementations.

I am still not willing to just strip out caching that works for the VAST MAJORITY of users for the few who decide to go out of their way to install a custom implementation and a statically linked Java distro. (See my unhinged rants about this on discord if you wanna see how hard it was to get a reliable reproduction case)

So this leaves us with two options. Store, I rebuild all the old installers/tools used by them to use no compression when creating the jars needed. Or Deep, making the installer check the uncompressed contents of processor steps.
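The 'Deep' idea can be sketched as hashing entry names plus uncompressed contents instead of the archive bytes. This is only an illustration of the concept, not the installer's actual spec; a real version format would need to pin down entry ordering and which metadata counts toward the hash:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class DeepHash {
    // Hash entry names plus their *uncompressed* contents, so two archives
    // that deflate differently but contain the same files hash the same.
    public static String deepSha1(InputStream archive) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        byte[] buf = new byte[8192];
        try (ZipInputStream zip = new ZipInputStream(archive)) {
            ZipEntry e;
            while ((e = zip.getNextEntry()) != null) {
                md.update(e.getName().getBytes(StandardCharsets.UTF_8));
                int n;
                while ((n = zip.read(buf)) != -1)
                    md.update(buf, 0, n);
            }
        }
        return String.format("%040x", new BigInteger(1, md.digest()));
    }

    // Helper: zip the same payload at a given deflate level.
    static byte[] zip(byte[] data, int level) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (ZipOutputStream zos = new ZipOutputStream(out)) {
            zos.setLevel(level);
            zos.putNextEntry(new ZipEntry("a.bin"));
            zos.write(data);
            zos.closeEntry();
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        // Same content at two compression levels: the archive bytes can
        // differ, but the deep hash is identical.
        byte[] payload = new byte[4096]; // highly compressible (all zeros)
        String h1 = deepSha1(new ByteArrayInputStream(zip(payload, 1)));
        String h9 = deepSha1(new ByteArrayInputStream(zip(payload, 9)));
        System.out.println(h1.equals(h9)); // prints "true"
    }
}
```

The cost Lex describes is visible here too: every downstream consumer of the version format would have to agree on exactly this traversal, which is the spec-change headache.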

With hard drive space being cheap, I'm really leaning towards Store. So I'm looking into what processors we run, and if it's possible to rebuild the installers to use versions that function with that ability.

Some statistics on issues with old versions:

  • Installers:
    • Versions:
      • Missing:
        • 1.1->1.5.2: 568
        • 1.6.1: 2
        • 1.6.4: 4
      • v1:
        • 1.5.2->1.12.2: 2013
      • v2:
        • 1.12.2: 9
        • 1.12.3+: 2174
    • No Executable Server Jar: 869
      • 1.17.1 -> 1.20.2 (This is the whole module system needing command-line arguments. I plan on eventually writing a bootstrap for these old versions.)
    • Running Server Failures:
      • Most of these are due to an incompatibility with a change in Java 8, which I have an idea how to fix, but for now using an older Java 8 is fine.
      • 1.14.4: 34 of 194
      • 1.16.3: 27 of 58
      • 1.16.4: 55 of 55
      • 1.16.5: 122 of 136
      • 1.17.1: 14 of 114
      • 1.18: 2 of 16
      • 1.18.1: 11 of 77
      • 1.18.2: 17 of 156
      • 1.20.2: 1 of 39
      • 1.20.4: 4 of 68

I'll eventually be setting up a system to do automated tests of all older client installations, but the server is the easiest for now.
Anyways, @Trial97: if you, or anyone else who wants to help, wanna hop on the discord and get up to speed on how the tests work and what work needs to be done on the InstallerRewriter side, I'm more than willing to talk it through with you guys.

There are other projects that are higher priority for me; I wanna get an initial release of the new toolchain out. So if you wanna see this any time soon, it'd be worth it to help.

@Trial97 (Author) commented Jun 6, 2025

I must admit that I'm still unclear on what exactly is expected from me regarding the installer processor work. If there is any form of documentation available on how the installer and its processors function — particularly regarding how they are initialized and managed — I would greatly appreciate being pointed to it, as I am not yet familiar with the surrounding ecosystem.

I did make an effort to review the relevant code, but I’m currently unsure where the processor definitions are populated from, which makes it difficult to fully grasp the workflow or identify the appropriate starting point.

From what I can tell so far, addressing this issue seems to involve a significant refactoring of the installer infrastructure. Given that I have had no prior interaction with this codebase, this appears to be too large in scope for me to take on at this time. As such, I would prefer to focus solely on this topic if I am to contribute, rather than shift to unrelated or preparatory tasks.

@LexManos (Member) commented Jun 7, 2025

If that's the case then my answer is what it has been for a while.
You have the tools/ability in place already to work around this issue on your end.
For us to address it properly is a rather monumental amount of work that involves making sure we don't break all the existing installs/installers in the process.

There are far bigger priorities at this point. Combined with this being a highly niche issue, that means this is pretty much my lowest priority. So it'll be addressed eventually.

Until then, you can detect if a non-standard zlib is installed on your end, and then nuke the hash checks yourself. This PR is not an acceptable solution as it is incomplete, and it requires the rebuild work to be done to retroactively deploy it anyways.
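One hedged way to do that launcher-side detection (a sketch, not an established API): deflate a fixed input with the JVM's `Deflater` and compare the output against bytes recorded once on a machine with reference zlib; a mismatch suggests a behaviorally different backend such as zlib-ng. The golden reference value is deliberately not included here and would have to be captured on a known-good setup:

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

public class ZlibProbe {
    // Deflate a fixed input with the JVM's zlib at default settings.
    // Compare the result (or its hash) against bytes captured on a
    // reference-zlib machine; a difference indicates a non-standard
    // backend such as zlib-ng is wired into this JVM.
    public static byte[] deflate(byte[] input) {
        Deflater d = new Deflater(Deflater.DEFAULT_COMPRESSION, true); // raw deflate
        try {
            d.setInput(input);
            d.finish();
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[512];
            while (!d.finished())
                out.write(buf, 0, d.deflate(buf));
            return out.toByteArray();
        } finally {
            d.end();
        }
    }
}
```

Note this only detects the JVM's own zlib (the one the installer actually uses), which matters given that statically linked JDK builds are unaffected regardless of what the system ships.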
