Docker Hub is no more
Yes, the title is clickbaity.
As you may have heard, Docker (the company) decided, then un-decided, to purge open source Docker Hub images unless the OSS maintainers pay $420 a year to keep their images available on the Hub.
The unfortunate thing is, this is not the first time something like this has happened. One day everything is fine, and the next, every DevOps person is scrambling to fix pipelines broken by a missing package, like NPM’s left-pad incident. Or NuGet/npm/Docker/etc. is down due to certificate expiration, a networking outage, a blown-up disk, and the like: incidents where critical infrastructure that a major part of your company’s operations depends on stops working or is gone completely.
If you are anything like me, and work at an organization big enough to afford a few CPU cores and a couple of terabytes of storage, you may have already thought about this situation and implemented a contingency plan, or are working on one. If yes, great! You can stop reading here and work on that. BYE!
If not, please continue reading; it might save your bacon in a similar situation.
Ok, but what should I do?
That’s simple - own your dependencies (not just Docker images) and stop depending on external registries!
Because bad things can happen if you don’t. You should host all your packages & dependencies regardless of the current situation with Docker Hub.
What happens if a package disappears tomorrow, like NPM’s left-pad incident?
What happens if a dependency gets re-published with malicious code under the same version?
What happens if the registry you’re pulling those dependencies from goes down or you’re rate limited, and your builds are broken for 6-12 hours?
What happens if you pull a dependency that has an incompatible license and you end up on the receiving end of a lawsuit?
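For the re-published-under-the-same-version scenario in particular, pinning images by immutable digest instead of a mutable tag is a cheap mitigation. A sketch (the image name is illustrative, and the digest placeholder must be replaced with the real value printed by `docker inspect`):

```dockerfile
# Pin by content digest instead of a mutable tag, so a re-published
# "3.19" can't silently change what you build on. Get the digest with:
#   docker pull alpine:3.19
#   docker inspect --format '{{index .RepoDigests 0}}' alpine:3.19
# <digest-from-inspect> below is a placeholder for that value.
FROM alpine@sha256:<digest-from-inspect>
```

The same idea applies to most package managers via lockfiles with integrity hashes.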
I know, I do sound like a shill for some of the products I’m gonna list below, but I promise, I’m not.
All of these questions came from the experience of having to deal with the consequences of most of those (except the lawsuit one).
What are my options?
Use a 3rd-party Docker registry or package feed server where you can host your own packages & images, plus all the ones you depend on.
Here are some options that might work well for you.
Option 1 - Use the cloud!
These don’t solve your problem entirely or cover every point, but they are a start. At least you’re paying, and you know you won’t lose things on a whim.
- One nice thing about the cloud options is that you don’t have to manage the infrastructure yourself.
- Globally distributed (usually)
- Fast downloads in & out of their networks
- May not cover all of your requirements.
Docker-specific options
Universal package registries that can host NuGet, npm, Bower, VSIX, Maven, PHP Composer, Python, RubyGems, and the like.
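For package feeds, switching clients to your own registry is usually a one-line config change. A hedged sketch for npm (the feed URL is a placeholder for whichever cloud or self-hosted feed you pick, which would proxy and cache the public registry):

```ini
; .npmrc — route installs through your private feed instead of
; registry.npmjs.org. The URL below is a placeholder for your own feed.
registry=https://packages.internal.example/npm/
```

NuGet (`NuGet.config`), Maven (`settings.xml`), and pip (`pip.conf`) all have equivalent settings.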
Option 2 - Self Host!
Yes, self-hosting may not be the best option for your case, but at least you have the option. You might also have to self-host where external bandwidth is limited, expensive, or unreliable; for example, if you’re located in Asia, Australia, Africa, or South America.
- You OWN your infrastructure, configuration and everything
- Costs less! (usually)
- Faster transfer speeds if your build servers are also on-premises.
- You have to manage it.
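To make “you have to manage it” concrete: the core of mirroring images into your own registry is just pull, retag, push. A minimal sketch, assuming a hypothetical internal registry hostname; the function prints the commands so you can review them before piping to `sh`:

```shell
# Placeholder hostname: replace with your own registry.
MIRROR="registry.internal.example"

# Print the docker commands needed to mirror one image into $MIRROR.
# Review the output, then pipe it to `sh` to actually run it.
mirror_cmds() {
  image="$1"
  echo "docker pull $image"
  echo "docker tag $image $MIRROR/$image"
  echo "docker push $MIRROR/$image"
}

mirror_cmds alpine:3.19
```

In practice you would run this (or its equivalent) for every upstream image your builds depend on, then change your Dockerfiles to pull from `$MIRROR`.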
Here are a couple of options that you can self-host.
My personal pick is Inedo’s ProGet, since it’s free for commercial use and you only pay for the advanced features (multi-node support, ultra-fine-grained access rules), which you may or may not need, and it has a nice UI for configuring everything.
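If all you need is insulation from Docker Hub outages and rate limits, the open-source `registry:2` image can also run as a pull-through cache. A minimal sketch of its `config.yml` (the storage path and port are illustrative defaults):

```yaml
# Minimal config.yml for the open-source "registry:2" image,
# running as a pull-through cache (mirror) of Docker Hub.
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
proxy:
  remoteurl: https://registry-1.docker.io
```

Then point the Docker daemon at it via `registry-mirrors` in `/etc/docker/daemon.json`, e.g. `{ "registry-mirrors": ["http://localhost:5000"] }`. Note that a pull-through cache only holds what has been pulled at least once, so it complements, rather than replaces, pushing your own images to a full registry.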
Assessing risk in software delivery and team workflow
Assessing risk in software delivery and team workflow is a critical aspect of any organization, regardless of its size. Larger organizations typically have more complex systems, with multiple teams working on various aspects of the software lifecycle. This can increase the risk of miscommunication, errors, and potential security vulnerabilities. In contrast, smaller organizations may have simpler systems and fewer teams, which can make it easier to coordinate and manage risk. However, smaller organizations may also lack the resources and expertise to effectively manage risk, making them more vulnerable to potential threats.
The criticality of software delivery and team workflow is another important factor to consider when assessing risk. If software is mission-critical to the organization’s operations or directly impacts customer experience, the risk associated with its development and deployment becomes even more significant. In such cases, it is essential to prioritize risk management and invest in measures to reduce the likelihood of errors or security breaches. This may include investing in tools and infrastructure, such as self-hosting dependencies or using third-party registries, to reduce the reliance on external services that may be prone to outages, rate limiting, or other issues.
To effectively assess risk in software delivery and team workflow, organizations should consider both their size and the criticality of the software being developed. By doing so, they can identify potential vulnerabilities and implement measures to minimize risk, ensuring the continued success and stability of their software and systems.
In conclusion, the recent events surrounding Docker Hub, and similar incidents in the past, highlight the importance of properly assessing the risk of relying solely on external registries instead of taking control of your dependencies. By self-hosting your dependencies, or by using a reliable 3rd-party Docker registry or package feed server, you can prevent disruptions to your operations and mitigate those risks. Weigh the pros and cons of each option and choose the one that best fits your organization’s needs. Taking measures to own your dependencies can save you from real headaches and keep your projects running smoothly.