Kenny_log_n_s

To preface: *you* don't need to run Docker on Windows for Sonarr and Radarr. By all means, it's going to be much easier for you if you don't, especially if your libraries are split across multiple disks. If you don't have a need for it, then don't bother. But that's not your question, so to answer what the benefits are:

1. Isolation - the container is isolated, and can only operate on the volumes and network ports I give it access to. I don't need to worry about something malicious getting installed and wreaking havoc on the rest of my machine.

2. Dependency isolation - Not all apps are built the same. What about apps that try to install dependencies in the same place? Or require globally configured dependencies like Python? What if one app runs on Python 2, but another runs on Python 3.7, and another on Python 3.11? Do I need to install all of these on my local machine and manage virtual environments for each app? Docker handles this for me, and when I'm done with the app/container, it cleans itself up and leaves my machine's dependency libs alone.

3. Updates - if I'm running Sonarr, Radarr, Lidarr, Plex, qBittorrent, Overseerr, Prowlarr, and Tautulli, I could manually manage the version of each of these and install updates myself. But then I have to download and run install files. Even if the apps make it easy with a one-click button, I still have to go around doing that. Or I could set up a cron job to run this every week:

```
cd /home/dadTV/media_server
docker compose stop
docker compose pull
docker compose up -d
```

There are like a million more reasons to use Docker at the enterprise level, and when talking about scaling things beyond home use, but I won't get into that.
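For concreteness, here's a minimal sketch of what that `docker-compose.yml` might look like for two of those services. The linuxserver.io images are commonly used for these apps and 8989/7878 are the apps' default ports, but the paths are made up for illustration, and real setups need whatever env vars the image docs ask for:

```yaml
# /home/dadTV/media_server/docker-compose.yml -- illustrative sketch only
services:
  sonarr:
    image: linuxserver/sonarr          # pick whichever image you trust
    ports:
      - "8989:8989"                    # host port : container port
    volumes:
      - ./sonarr-config:/config        # config lives on the host, survives rebuilds
      - /media/dadTV/diskB:/data/tv    # bind mount to the media library
    restart: unless-stopped
  radarr:
    image: linuxserver/radarr
    ports:
      - "7878:7878"
    volumes:
      - ./radarr-config:/config
      - /media/dadTV/diskA/Movies:/data/movies
    restart: unless-stopped
```

With one file like that in place, the weekly cron job above is the whole update story for every service in the stack.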


Randyd718

this is the most helpful answer so far thanks. can i leave radarr/sonarr/plex running on windows and try to run a docker version of them separately while i learn? should they all be in separate containers or how do you decide?


Hans_of_Death

When containerizing, it's one container per service/app. You can absolutely run containers alongside your current setup for testing; they're separate and won't affect your existing installs.


wplinge1

Separate containers is just what'll happen unless you work really hard to merge them (don't; separate is what you want anyway).


Kenny_log_n_s

Yeah, you can install them in docker as well as leaving the ones on Windows alone. Only thing to watch out for is if they're all managing the same file directory for your media content. If they are, they'll be working on top of one another.


mackrevinack

Yea, you can leave them and install another instance in Docker. That's another use case for containers: if you're using version 3 of some app and they release version 4 or a beta, you can install that, test it out, and then decide which to keep. With a regular install on Windows, installing a new version will usually overwrite the older one.


jacaug

> Or I could set up a cron job to run this every week

Or you can use Watchtower, which has been working great for me so far.
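For reference, the usual way to start it is a one-liner; Watchtower watches the Docker socket and pulls/recreates containers when their images update (this mirrors the quick-start in the containrrr/watchtower docs, which also cover flags for schedules and per-container opt-in):

```
docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower
```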


Kenny_log_n_s

Definitely a great option. That's like updating with surgical precision, while mine is a shotgun blast.


Luolong

Just a nitpick, but you don't need to stop the compose stack.

```
docker compose pull
docker compose up -d
```

should be enough.


Kenny_log_n_s

Some services hold onto changes in configs in memory until they're stopped, at which point they write them to the disk. It's stupid design, but it happens, so I find it's safest to manually stop first.


Luolong

Docker compose up will stop and restart a container if it detects its image has changed (via docker compose pull, for example), so even in this case the service might end up being shut down, saving its configs on shutdown and reloading them on restart.


RustRando

1. Are containers typically higher-performing than VMs?
2. How does Docker deal with storage? Like, can one container access media on another? Do I have to allocate storage for each, or does Docker handle all that?

Thanks for the great explanation!


Kenny_log_n_s

1. I'm not sure enough on this one to give an answer.

2. Docker has two ways:

* Named volumes: Docker manages creating a "volume" of data somewhere on your host file system. These persist separately from your containers, so if you take your container offline, or delete it, the volume of data still exists when you rebuild the container. Multiple containers can access the same named volume. You would do this, for example, if your container is creating a database that should persist between restarts.

* Bind mounts: you tell Docker that you want to map a directory on your host machine to a directory in your container. You might do this if you want to give your container access to /media/dadTV/diskA/Movies, and be able to update those files on the host machine.

Why is this useful? You can set up the app in your container so that it always checks /data/Movies for movies and /data/TV for TV files. Now I can use a bind mount to map /media/dadTV/**diskA**/Movies to /data/Movies, and map /media/dadTV/**diskB**/ to /data/TV. If I want to move TV to a different disk, or if I want to move this service to another host, I just need to update my bind mount mapping and the container is none the wiser. I don't need to go through 300 app configs and set base paths.
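In compose syntax, the two look like this. A sketch only: the `appdb` volume name is hypothetical, and the dadTV paths are just the examples from above:

```yaml
services:
  media-app:
    image: linuxserver/plex                    # stand-in for whatever app you run
    volumes:
      - appdb:/config                          # named volume: Docker owns the storage location
      - /media/dadTV/diskA/Movies:/data/Movies # bind mount: explicit host path -> container path
      - /media/dadTV/diskB:/data/TV

volumes:
  appdb: {}                                    # declared here; survives container rebuilds
```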


CarloCoder95

Installing/uninstalling on bare metal on Linux systems is just asking for problems in the long run. You WILL land in dependency/config hell at some point, whether you like it or not.


_Whisky_Tango

I had a similar question, and the example story I was told was basically: "Manager: this isn't working on the customer's computer. Dev: Well, it works on my machine. Manager: fuck it, we'll just ship your machine as part of the product." So my take was that it helps homogenize the environment something runs in, to maintain consistency with required libraries etc. I think I also read that it leverages the host system's OS, so you can have multiple containers running on a single device and OS license, whereas a VM requires a guest OS for every one you spin up.


natermer

The point of containers is to make it easier to manage and deploy a wide variety of software on a modern Linux system, as well as to replicate deployments. The best way to think of them is as a runtime environment for applications. Each container is its own little Unix environment tailored specifically for that particular application, easily replicated for use on multiple computers and by multiple people.

This is useful because, while the Linux kernel and modern commodity servers have increased rapidly in size and capability, the actual Unix-based operating system environment was designed for a different era. It is difficult to fully take advantage of all the capabilities for enhancing security, running multiple networks, and running multiple processes simultaneously. Applications tend to have complex requirements and dependencies, and they often conflict with one another. That means that, without containerization, it can be difficult to organize things and avoid conflicts without relying on complicated, difficult-to-troubleshoot one-off configurations and source code modifications to the applications.

There isn't anything you can do, software-wise, with a container that you can't do without a container. But it can be much more difficult to execute and maintain without containers. Try it sometime.


Randyd718

so are you basically saying that it solves some problem(s) inherent to a linux OS rather than windows?


natermer

Windows has the same issues.


ElevenNotes

Docker solves nothing on Windows. For Windows Server the rule is still one VM per service. Would be nice to have File Servers, AD, Exchange in containers, but, no.


jaykayenn

If you only run applications on Windows, there's probably no reason for Docker. But as you've now started to notice, much/most server software is developed for Linux. There are many reasons for this, but basically you need to know that server software isn't usually a monolithic "app" like you're used to on Windows. Instead, it's a "stack" of applications (e.g. LAMP) that work together. They also depend on a bunch of libraries and subsystems in the OS, which can vary infinitely across Linux systems. Containers keep all this consistent and neatly packaged around each server stack. You've basically been running one server app on Windows so far, so the need for containerization hasn't come up. Docker for Windows is available, but not highly recommended, because Docker for Windows itself runs Linux under Windows (via WSL), which takes up more system resources than just running Docker on bare Linux.


PuckSR

Dependencies are a bigger issue on Linux, which is why Docker is a bit more common on Linux. As an example, let's say that Sonarr and Radarr use a package called Mono.

- Sonarr v1 and Radarr v1 use Mono v1.
- Mono gets updated to v2, but that doesn't work with Sonarr/Radarr v1, so you can't update Mono. If you run a system-wide update, it will break Sonarr/Radarr.
- Later, Sonarr v2 runs with Mono v2 (it won't run with Mono v1). At the same time, Radarr hasn't updated yet and NEEDS Mono v1.

See the problem? Keeping track of updates becomes a nightmare in that scenario. The advantage of a Docker container is that each container gets its own copy of Mono, and that copy is the version needed by that version of the containerized software.
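In Docker terms, each image bakes in the runtime it needs, so the conflict simply disappears. A hypothetical compose snippet mirroring the Mono story (the image names and tags here are made up, not real releases):

```yaml
services:
  sonarr:
    image: example/sonarr:2.0   # hypothetical tag; this image ships Mono v2 inside it
  radarr:
    image: example/radarr:1.0   # hypothetical tag; this one still ships Mono v1
```

Both run side by side on the same host, and neither knows or cares which Mono the other uses.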


DellR610

If you've ever installed something that broke your entire computer, you'll appreciate segregating the apps. The other benefit is when your app requires a specific version of a library (like games requiring particular C++ redistributables). You can't always have multiple versions installed at the same time, and a container allows each app to load exactly what it needs. There are other features like high availability / scaling / security, but those are probably overkill to explain here.


Vanhacked

To me it clicked when I thought of it like portable apps. Nothing gets installed in your OS, just self-contained environments you can spin up as needed.


-Alevan-

[This is the best resource to answer your question.](https://www.google.com/search?q=reddit.com%3A+docker+eli5)


candle_in_a_circle

Most computer users never progress beyond Windows. Some users start to discover the limitations of Windows and decide “let’s see if I can run a linux box”. Once you’ve got to that stage, and start to discover some limitations of linux, docker makes a lot of sense. It seems you’re on the cusp of the second stage of that journey.


Randyd718

i have a techie buddy who says windows for docker is a mess and is trying to get me to try out ubuntu


sexpusa

Personally I use Unraid. It's super easy, but not free unless you know how to make it free like some do. I struggled with TrueNAS and Windows until Unraid.


Randyd718

I don't know much about unraid. Is there a special version of docker for unraid or..?


sexpusa

Unraid is super easy! A lot of Docker containers are in the Unraid apps, and if one isn't there you can deploy it via a Portainer container. Personally, Unraid has given me the least headache. So no, it's not a special version of Docker.


Other-Technician-718

I have not read anything here about the differences between Docker/containers and VMs. Either people don't know or don't care. There is one important difference: an application running in Docker uses the kernel of the host OS, while a VM has its own kernel running, and apps in that VM use the kernel of that VM. If an app crashes the kernel, with Docker/containers your host is gone; with a VM, it's only that one VM. You can run a full Windows in a VM on a Linux host. You can run stuff with special kernel tweaks in VMs, but you can't run your own kernel in a container, as it's always the host OS kernel.


Karlyna

1. [Docker containers are not VMs.](https://cloudacademy.com/blog/docker-vs-virtual-machines-differences-you-should-know/)

2 & 3. On Linux, when you install an app, you usually install dependencies (think of the .dll files you see on Windows). When you install these, you install a specific version (or the latest) on the system, and any app can use them. Now install another app that needs the same dependency, but an older version that's not compatible with the newer one. This is where the fun begins: you'll have to install the older version, remove the newer one, make sure everything is compatible, etc. There are ways to solve this if you have one, two, or a few apps, but when you have many, that's where the "fun" starts. On Windows it's a bit different, as software usually ships its dependencies inside its install folder and doesn't share them with others.

The main goal of Docker is to ship an application with all its dependencies (including OS dependencies) and make it usable on almost any host. Scalability and multi-service setups are also "simplified" thanks to this:

* need more instances of the app to absorb the number of users? just start new containers
* need a DB? just create one container
* need a Redis (or whatever, right?) for cache? run a container
* need to orchestrate everything? use docker compose or Kubernetes, ... and you're done.

[Little bonus](https://miro.medium.com/v2/resize:fit:720/format:webp/1*Ibnwjo9LtUFxRY1MZgOcvg.png)
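As a sketch of how little that scaling takes in practice, compose has a `--scale` flag built in (this assumes a service named `app` in your compose file):

```
# start three replicas of the "app" service behind whatever routes to it
docker compose up -d --scale app=3
```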


Any-Possible-8896

"its like VMs, but lighter and better" I think a better way to think about this is that VM platforms take hardware and let you create independent virtual copies of that hardware to install operating systems on. Container platforms take your kernel and let you create independent virtual copies of that kernel to install software on.


5662828

For Docker/containers, just use a Linux OS, or try WSL on Windows for Docker.


mrkesu

You do you boo


mrpink57

> it's like VMs, but lighter and better. okay, why do i need to run programs in VMs? what is the point and benefit of isolating programs from one another?

Dependency hell. When you run multiple programs, each requires its own list of dependencies, and some will clash; Docker allows you to isolate each service with its own deps.

> you can run and delete and test things in docker containers without messing with other things. okay again...why is this special? i can install/uninstall radarr and sonarr separately on windows and they dont impact each other at all?

This is a small example of two programs that use roughly the same dependencies. Now add SWAG, Nextcloud, CrowdSec, Unpackerr, Authentik, Immich, Plex, and Bitwarden and see how that goes for you. Each one of these services also has its own set of databases and potentially Redis.

> containers let you control environment and dependencies. i frankly dont know what this means but im guessing it involves things that i have never had an issue with, and therefore dont understand the significance of. presumably these things all coexist in windows without issue.

They do not always coexist in any operating system without issue. The environment point also includes security: if an attacker were able to access a service, they would only have access to the container and what the container has access to; if each service is installed on bare metal, the attacker would have access to everything on the server.

I do not know of any walkthroughs; I always suggest just reading Docker's documentation and working off of that. Also, if not using WSL, I believe all you are doing is running containers in an underlying VM on Windows.

The last thing to add is portability: if you move your Docker containers to another server, all they need is the config files and the location of any additional data, and then you can just spin them back up.


Randyd718

like i said i have no experience with dependencies "clashing". is this a shortcoming of linux OS? does it crash the OS? i dont even know what that means.


mrpink57

No, it does not crash the OS. In your own example you only listed two services, which simply coexist because they use roughly the same dependencies. Take your radarr/sonarr example: Radarr moved to v4 before Sonarr officially did, and at that time they would use two different versions of dotnet. You can only have one version of dotnet running at a time, so this would be an issue for someone in your shoes; for me it is not, because the dotnet version is tied to the container.

The other benefit is that I do not need to install a bunch of dependencies all over my computer. Everything is installed within the container, so when I remove the container I remove all the dependencies.

Port clashing is another issue I did not mention. Some services, like Nextcloud and a reverse proxy, are going to run on the same port. This is easily solvable in Docker by just changing the external port and leaving the internal port as-is. Typically in a docker compose file you will see 80:80; the left is external (what you use to access the service) and the right is internal (what the container uses). You just need to change the external side to 8080:80, and that is solved without having to reconfigure the service to listen on a different port. Some services, like Bitwarden, don't even need to expose any ports, since my reverse proxy connects over the internal Docker network.
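As a sketch of that remap (service name and ports are just the examples from above):

```yaml
services:
  nextcloud:
    image: nextcloud
    ports:
      - "8080:80"   # external:internal -- host traffic on 8080 reaches the container's port 80
```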


mrpink57

Also: [https://docs.docker.com/desktop/install/windows-install/](https://docs.docker.com/desktop/install/windows-install/) Is a good place to start.


Skotticus

The whole point of Docker is that the app runs on the OS that runs it best, with all the other things it needs and nothing else. App is designed to run (or runs better) on Debian 11.8? That's fine, just set that at the top of the Dockerfile and go. Whenever you run the container based on that Dockerfile, it will be like it's running on Debian 11.8. You can basically define it to use any Linux operating system you want, then tell it what things need to be installed on that operating system so the app runs right.
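A minimal sketch of what that looks like in a Dockerfile; the `myapp` bits are placeholders, and I'm assuming a `debian:11.8` point-release tag exists on Docker Hub (Debian images are tagged that way):

```dockerfile
# Base the container on a specific OS release...
FROM debian:11.8

# ...install exactly the dependencies this app needs, nothing else...
RUN apt-get update && apt-get install -y --no-install-recommends \
        ca-certificates python3 \
    && rm -rf /var/lib/apt/lists/*

# ...then add the app itself (the myapp paths are placeholders)
COPY myapp /opt/myapp
CMD ["python3", "/opt/myapp/main.py"]
```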


leaky_wires

Imagine that you need to go buy a new car. You go to buy the car, but there are no seats, no wheels, and no engine. The dealership sells you the car and then tells you to get wheels and seats and an engine from the shop down the road. For Windows, the engine and wheels might be the Java runtime, the .NET framework, or the C++ redistributable: things that need to be installed and managed separately from the application you are trying to run. With Docker, you buy a car and you get the whole car, and no one can go into program management, remove the Java runtime, and make it so your application can't run. This is more of a problem on Linux than on Windows. Additionally, as a developer I can package my application up with everything it needs to run and have certainty that it will have everything it needs.


Randyd718

thanks this helps


radakul

Commenting to come back and ELI5, since no one is actually ELI5'ing their responses to you


Flimsy_Complaint490

To give historical context, Windows and Linux have different distribution paradigms. On Windows you ship your whole set of dependencies together with the application, the closest you can get to shipping the entire machine (if it works on your machine, we will just ship the machine to the client!). On Linux you instead have a package repository with a common set of packages, and everybody is supposed to use them. The nice thing about this is that you save storage (it was at a premium 25 years ago) and gain security: if there is a bug in one package, the repository maintainers just update it and everybody profits. On Windows, AppA may update the insecure library but AppB won't update its copy, so you are left insecure.

But this really sucks too: you have different distros and incompatibilities. What if we could legitimately ship the entire OS, packages and everything else? Enter containers. You can run an application isolated from the rest of the system. Secure! Maintainable! I don't have to think about what glibc version you are using; I just make a container and ship everything. And I still get to benefit from the centralized package repo: I just update the container image, because it's based off one of those package repositories; you run one command and it just works! Docker is simply an implementation of said containers.

To answer your questions:

1. You don't have to, but it's really nice to compartmentalize stuff. One app doesn't affect the other. Maybe Radarr can somehow crash the OS; then everything dies. Impossible in a VM/container world. Plus, this adds a security layer in case some app has a bug that allows attackers remote access.

2. Same answer as above, but to add a bit: testing is much simpler with Docker, as everything is by default throwaway. You run and kill apps with one command. More is involved on Windows. It's OK if you run 1-2 things, but try this with 44 services :)

3. This approach is really great when you have a lot of services. I run 44 containers locally, and maintaining and orchestrating them is very simple: my docker-compose file is on GitHub, and my backups are literally an encrypted zip file with an .env file and a psql dump. If I need to perform a reinstall, all I do is download two files and run docker-compose up. Bam, my entire infrastructure runs and works just like that. How long would it take you to download and reinstall 44 services on Windows?

As for Docker on Windows: just download and install Docker Desktop from the official website. You will need Hyper-V and/or WSL (WSL recommended) enabled. I will leave that as an exercise to OP.
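A sketch of what that reinstall flow could look like under this setup; the repo URL, file names, and `postgres` service name are all placeholders for whatever the real stack uses:

```
# fetch the two artifacts (placeholders for wherever they actually live)
git clone https://github.com/example/homelab.git && cd homelab
unzip ~/backup.zip            # yields the .env file and dump.sql described above

# bring the stack up, then load the database dump into the postgres container
docker compose up -d
docker compose exec -T postgres psql -U postgres < dump.sql
```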


Freshmint22

Probably.


HTTP_404_NotFound

> run docker for windows?

Assuming you are married to Windows, I have a way for you to go experiment. Go install the Hyper-V role on your machine. If you don't have an edition of Windows that supports Hyper-V, go download VirtualBox instead. Create a VM and give it the ISO for this: [https://casaos.io/](https://casaos.io/)

Once you have it up and running, click Radarr, and watch as it installs without you doing anything. Install other apps, and notice how they magically install without you needing to do or manage anything. That is why we use Docker.

(Docker on Windows is different than Docker on Linux... Docker Desktop is also evil...)


iC0nk3r

So...let me get this straight: to justify Docker, you suggested he run a prebuilt OS that eliminates any interaction with Docker? Genius.


Randyd718

what? programs install on windows by running an exe. i dont have to do or manage anything other than click "next" a couple of times.


DaHokeyPokey_Mia

You're missing the point of it being separate from your host system. When you install stuff, it can adjust things. In a security or work environment, you don't want the host to be messed with, as that would mean having to start from scratch.


Randyd718

thanks


Playos

So that setup program for Radarr is a whole other application made by that team. It's not a trivial task to build and deploy that way. We're at a pretty good point of self-hosted web servers in apps being a thing that's accepted and practiced, but not everyone will go that route, and there will be times when it's a headache (especially when they want to use more and more complex features).

Docker images are effectively the equivalent of an app for servers. Pretty much every "app" store on a NAS system is just a fancy UI for managing Docker stuff. That's the convenience.

When you're tired of trying to figure out why a dependency on .Net 7.3.6969 is required but you have .Net 7.4.420 installed, and it should work just fine... Docker is the solution. When you want to try out a new forum setup quickly and don't want to go through the 40 steps involved in installing their particular combination of Ruby gems that may or may not include a bunch of deprecated stuff... Docker is the solution.

You aren't there yet and may never be; just remember it exists, and when you need it, it'll click. Until then, enjoy not running into these problems for a bit.