Containerization seems overrated. I haven’t really played with it much, but as far as I can tell, the way it’s most commonly used is just static linking with extra steps and extra performance overhead. I can think of situations where containers would actually be useful, like running continuous integration builds for someone you don’t entirely trust, but for just deploying a plain old application on a plain old server, I don’t see the point of wrapping it in a container.
Mac OS 7 looked cool. So did Windows 95.
Phones are useful, but they’re not a replacement for a PC.
I don’t want to run everything in a web browser. Using a browser engine as a user interface (e.g. Electron) is fine, but don’t make me log in to some web service just to make a blasted spreadsheet.
I want to store my files on my computer, not someone else’s.
I don’t like laptops. I’d much rather have a roomy PC case so I can easily open it up and change the components if I want. Easier to clean, too.
The idea is that you can have different apps that require different versions of dependency X. That could stop you with traditional package management, but it works fine with containers.
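To illustrate what the conflict looks like at the loader level: each app asks for a specific major version of a library by soname, and a distro package manager often ships only one of them system-wide, while each container image can carry its own. A minimal sketch (libfoo is a made-up library; dlopen/dlsym are the real POSIX calls):

```c
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    /* App A might pin "libfoo.so.1" while app B pins "libfoo.so.2";
     * the conflict appears when the host can only have one of them
     * installed. "libfoo" is a hypothetical name for illustration. */
    void *handle = dlopen("libfoo.so.1", RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    int (*foo)(void) = (int (*)(void))dlsym(handle, "foo");
    if (foo)
        printf("foo() = %d\n", foo());
    dlclose(handle);
    return 0;
}
```

(On older glibc you also need to link with -ldl.)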
Haven’t seen Mac OS 7, but 100% agree on Windows 95. 2000 is better, though.
Still can’t believe some people actually believe they are.
100% agree
Sometimes you just have an hour free, and that’s not enough time to go home, but too much to just kill. That’s when a laptop is great. Also, sometimes going outside to do stuff feels better than doing it at home.
In case you’re curious, here’s a browser-based emulator running Mac OS 7.1.
Yes, I agree, it looks awesome.
Also see Mac OS 8, which added a shaded-gray look not unlike Windows 95, and Mac OS 9, the last version of the classic Mac OS. These versions have a lot more features than the older version 7, but they also take much longer to boot—so long that Apple added a progress bar to the boot screen!
That’s what I mean by “static linking with extra steps”. This problem was already solved a very long time ago. You only get these version conflicts if your dependencies are dynamically linked, and you don’t have to dynamically link your dependencies.
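Roughly what I mean, as a sketch (libfoo and its foo() are stand-ins; -static is the standard GNU/Clang flag):

```c
/* app.c -- depends on a hypothetical libfoo.
 *
 * Dynamic build (resolves libfoo.so at run time, so whatever libfoo
 * version the system has is the one you get):
 *     cc -o app app.c -lfoo
 *
 * Static build (copies the code out of libfoo.a into the binary, so
 * no libfoo needs to exist on the machine you deploy to, and no
 * version conflict is possible):
 *     cc -static -o app app.c -lfoo
 */
#include <stdio.h>

int foo(void); /* provided by the hypothetical libfoo */

int main(void) {
    printf("foo() = %d\n", foo());
    return 0;
}
```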
Yes, you don’t have to dynamically link dependencies, but you don’t want to recompile your app just to change a dependency version.
Don’t I? Recompiling avoids ABI stability issues and will reliably fail if there is a breaking API change, whereas not recompiling will cause undefined behavior if either of those things happens.
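Here’s a contrived sketch of the kind of silent ABI break I mean, with both “versions” simulated in one file (libfoo, foo_init, and the struct layouts are all made up):

```c
#include <stdio.h>

/* Layout the app was compiled against ("v1" of a made-up libfoo): */
struct foo_opts_v1 { int size; };

/* Layout the library uses after a silent ABI break ("v2"): */
struct foo_opts_v2 { int size; int flags; };

/* Stand-in for the library function; it believes in the v2 layout. */
static void foo_init(void *p) {
    struct foo_opts_v2 *opts = p;
    /* 'flags' was never allocated by the caller: garbage, no error. */
    printf("size=%d flags=%d\n", opts->size, opts->flags);
}

int main(void) {
    /* The stale binary still uses the v1 layout: no 'flags' field. */
    struct foo_opts_v1 opts = { (int)sizeof opts };
    foo_init(&opts); /* out-of-bounds read: undefined behavior */
    return 0;
}
```

Recompiling against the v2 header either fixes the layout, or fails loudly at compile time if the API changed too.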
That’s why semver exists: major.minor.patch. Usually you don’t care about patches; they address the efficiency of things inside the lib, with no API changes. Something breaking could land in a minor update, so you should check the changelog to see whether you need to do anything about it. A major version bump will most likely break things. Once you understand this, you’ll find dynamic linking beneficial (no need to recompile on every lib update), and containers will eliminate stability issues, because libs won’t update to the next minor/major version without tests.
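As a sketch of that rule (a hypothetical helper, assuming versions at or above 1.0.0):

```c
#include <stdbool.h>
#include <stdio.h>

struct semver { int major, minor, patch; };

/* Encodes the rule above: same major is OK to take (but skim the
 * changelog on a minor bump); a different major means expect
 * breakage, so don't upgrade automatically. */
static bool safe_to_try(struct semver have, struct semver want) {
    if (want.major != have.major) return false;      /* likely breaking */
    if (want.minor != have.minor)
        return want.minor > have.minor;              /* check changelog */
    return want.patch >= have.patch;                 /* internal fixes  */
}

int main(void) {
    struct semver have = {1, 4, 2};
    printf("%d\n", safe_to_try(have, (struct semver){1, 4, 7})); /* 1 */
    printf("%d\n", safe_to_try(have, (struct semver){1, 5, 0})); /* 1 */
    printf("%d\n", safe_to_try(have, (struct semver){2, 0, 0})); /* 0 */
    return 0;
}
```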
What’s so horribly inconvenient about recompiling, anyway? Unless you’re compiling Chromium or something, it doesn’t take that long.
Still, it’s going to take some time, every time some dependency (of a dependency, of a dependency) changes (because you don’t want to end up with a critical vulnerability). Also, if the app is going to execute some other binary with the same dependency X, dependency X will be in memory only once.
Compared to the downsides of using a container image (duplication of system files like libc, dynamic linking overhead, complexity, etc.), this is not a compelling advantage.
As for executing some other binary with the same dependency X: that seems like a questionable design choice.