No, EDO is an example of what Les Hatton, author of Safer C and software safety guru, told us when he came to give a lecture to my department: research has shown that there is about a 12% chance that fixing a bug leads to the introduction of a bigger, worse bug. EDO is currently going through an intense period of bug fixing while the devs and QC departments are working from home, with limited resources to test on (you can't all take the test machines home; they are either split across the teams or centralized at the office with limited access) and with less effective communication tools, so the chances of catching these new bugs before they make it into the wild are also smaller. Let's face it: working together over Zoom or Teams is less effective than walking over to a colleague and having a hands-on session at a development/test system. I work in software development myself and have noticed the same thing.
So not so much the butterfly effect, but more a case of "let's quickly roll out these bug fixes". There will be a burn-down of bugs during the intense bug-fixing period EDO is in, but let's be realistic: devs are humans, humans make mistakes, mistakes lead to bugs, and some of those will make it through QC so that only the community runs into them. Only if the community is willing to notify the devs of these bugs, with as much detail as possible about how and when they occur, can the dev team tackle them. Complaining that something is broken deep down in a forum thread means the bug will most likely go unnoticed, let alone get prioritized for fixing. Also remember that some bugs are interacting bugs, where the cause is not the game but the driver, the OS, extra background software, the runtime framework used to run Windows apps on Linux, etc.
For instance: yesterday I was testing some CUDA code on our A100 server and every CUDA app I tried running, even those that had previously worked, would fail at the very first CUDA API call. So I had to do some investigating to see where the issue lay. The only difference I could see between the last time I had used the machine and now was that the server had been rebooted. I checked the status of the A100s (we have two) and noticed that the first card was set to MIG mode while the second was not. Now, I've been experimenting with that mode for my work, but MIG mode should not survive a reboot (the MIG documentation states this explicitly), so this was suspect. I used NVIDIA's tooling to force the card back into normal mode and hey presto, my CUDA apps worked again.
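For anyone who wants to do this kind of triage themselves: the first step is to surface the actual error from that first API call instead of just watching apps die. Below is a minimal sketch (not our actual test code) of such a check; the exact error a misconfigured MIG card reports will depend on your driver version.

```c
// minimal_check.cu -- surface the error from the very first CUDA runtime call.
// Build with: nvcc -o minimal_check minimal_check.cu
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    // The first runtime API call implicitly initializes CUDA; on our
    // misconfigured A100 this is exactly where every app failed.
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        fprintf(stderr, "first CUDA call failed: %s: %s\n",
                cudaGetErrorName(err), cudaGetErrorString(err));
        return 1;
    }
    printf("found %d CUDA device(s)\n", count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, i) == cudaSuccess)
            printf("device %d: %s\n", i, prop.name);
    }
    return 0;
}
```

The MIG check and reset themselves go through nvidia-smi: `nvidia-smi --query-gpu=mig.mode.current --format=csv` shows which cards are in MIG mode, and `nvidia-smi -i 0 -mig 0` (run as root, with the card idle) puts card 0 back into normal mode.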
So as you can see, the behavior I initially got pointed to a problem with the CUDA installation, and if I were the maintainer of the system, checking that installation would have been one of the first things to try, but the cause was something outside the CUDA software's control.