How SHOULD partners manage a cloud? :: mesosphere :: docker

Problem

In the (future) case of a localized ERPNext, the maintaining partner would have to spin up and maintain its own cloud platform in order to serve the SaaS segment.
Certainly, a good partner is defined by good implementation services and good local functional and framework knowledge; infrastructure questions are not their unique selling point.
Since they do not have such capabilities at their disposal, they would require abstraction and ease of use in order to do their job effectively.

Proposal

Set up a standard way of working which makes the life of partners a lot easier:

  • Expose single-node behaviour to the partner through Mesosphere as a cloud-agnostic (!) Data Center Operating System
  • Leverage the advantages of containerized applications by providing optimized and deterministic Docker images for production (see interesting articles 1 & 2) and development workflows
  • Make use of Mesosphere’s native Docker support and spin up client images from a partner’s private registry if the client requires a private cluster - say, 2 app containers
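To make the third bullet concrete: on Marathon (the Mesos framework that Mesosphere ships), spinning up two app containers from a partner’s private registry comes down to posting an app definition to Marathon’s REST API. The registry host, image name, and app id below are made up for illustration; this is a sketch of the shape of the request, not a tested deployment:

```shell
# Ask Marathon to run 2 instances of the client's app image,
# pulled from the partner's private registry (hypothetical names).
curl -X POST http://marathon.internal:8080/v2/apps \
  -H 'Content-Type: application/json' \
  -d '{
    "id": "/client-a/erpnext-app",
    "instances": 2,
    "cpus": 0.5,
    "mem": 512,
    "container": {
      "type": "DOCKER",
      "docker": {
        "image": "registry.partner.example:5000/erpnext-app:latest",
        "network": "BRIDGE",
        "portMappings": [{ "containerPort": 8000, "hostPort": 0 }]
      }
    }
  }'
```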

Seems like a proprietary platform.

Copying from github issue,

Agreed about your partner attitude and that’s why we’re building the bench tool for managing development and production environments. It also has support for multiple sites running the same codebase.

Although we haven’t made any choices in terms of container technology / virtualization, that doesn’t stop you from trying it out. I suggest that we can discuss specific use cases and add relevant features to bench itself.

Actually, the underlying choice to adopt a container-based (i.e. Docker or cgroups) deployment workflow is platform agnostic, so Mesosphere’s license model is not a concern at this point of the decision process.
Mesos/Marathon and Chronos are open source; Mesosphere just packages them nicely. Equivalent alternatives (though with a different architecture design) are CoreOS with fleet.

Actually, I would rethink the role of bench in a Docker world. From my point of view (which is a Docker one), it should not enter the realm of packaging and deployment. Nor should it add those features, because they are fast-moving targets and others do it better.

I think one could start with a perfectly configured Dockerfile already; the interest and dynamics will probably come by themselves. A minimized, reduced-surface app container, preconfigured to be linked with a DB container and probably a load-balancer container…
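As a minimal sketch of that linking idea (image names are hypothetical; in 2014-era Docker the wiring would be done with `--link`):

```shell
# database container first, then the app container linked against it
docker run -d --name erpnext-db -e MYSQL_ROOT_PASSWORD=secret mariadb
docker run -d --name erpnext-app --link erpnext-db:db -p 8000:8000 partner/erpnext-app
```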

I’ve seen an interesting staging/deployment workflow which makes use of Deis and CoreOS/fleet for nice and smooth distributed deployments, and it will work with Mesos/Marathon shortly. There are nice tools to get service discovery with Consul via progrium/registrator, either on CoreOS/fleet/etcd or on Mesos/Marathon.
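For the service-discovery part, registrator watches the local Docker socket and registers each container it sees with Consul; roughly (host names are assumptions, not a tested setup):

```shell
# a single-node Consul server (progrium images, as referenced above)
docker run -d --name consul -p 8500:8500 progrium/consul -server -bootstrap
# registrator announces every container start/stop to Consul
docker run -d --name registrator \
  -v /var/run/docker.sock:/tmp/docker.sock \
  progrium/registrator consul://consul-host:8500
```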

I think, personally and strategically, that because of the benefits of dev-prod parity, the dev/staging workflow for every developer within the ecosystem should already be within the distributed paradigm - you can run CoreOS (6 MB in RAM) easily on your local VM and use Deis for containerization and propagation of your app; you can even connect a Travis build to your Deis API… well, lots of cool stuff. I’ve read a lot about this in recent days, and I truly believe it would be very beneficial.

As to multiple sites on the same codebase: you would have a swarm of app containers (ideally statically compiled), propagated on your PaaS/IaaS behind a load balancer, against a swarm of persistent database containers or - better - a platform-distributed database service. I suppose the sites are just different databases; all file-bound persistent data would be stored in shared persistent volumes of the app containers. No big deal. The resource (memory) overhead is around 6 MB per node (in the case of CoreOS), but this could be VERY nicely overcompensated by optimized containers. If static compilation is too difficult, you would only - and only - need a Python interpreter and the dependencies on it, nothing more. Nothing! :wink: So the base distro can be a busybox. I think the point here is about thinking distributed already in the dev workflow.
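The multi-site picture above could be sketched like this (all container and image names are hypothetical; `--volumes-from` and `--link` were the era’s mechanisms for shared files and wiring):

```shell
# volume-only container holding the shared site files
docker run --name sites-data -v /sites busybox true
# a small swarm of identical app containers, all against the same DB service
for i in 1 2 3; do
  docker run -d --name app$i --volumes-from sites-data \
    --link erpnext-db:db partner/erpnext-app
done
# a load balancer fronting the swarm
docker run -d --name lb -p 80:80 \
  --link app1:app1 --link app2:app2 --link app3:app3 partner/nginx-lb
```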

This all sounds good and exciting, but I hope there will always be a low-tech install option for people with only basic knowledge.

I would love to be in a place where I could deploy servers algorithmically, but right now my server is a castoff I was able to hang onto after our last round of desktop upgrades. The people in my little skunkworks project are me and a helper, both part-time, so we’re not exactly making progress quickly, but there is no practical likelihood of making this an “official” project in the next three years, so we go as fast as we can. Having to spend more time learning deployment tools would just take time away from ERPNext.

Dale

P.s. Cloud is not a possibility for me, nor, I suspect, for many large organizations that don’t have access to the latest tech at the business-unit level unless it’s part of an enterprise program. I’m currently using the “Easy Way” script to install ERPNext on CentOS 7 installed from a DVD.

Hi David_Arnold,
The Docker & CoreOS approach sounds interesting.
I am just wondering (short of needing to deploy thousands of rapidly growing or shrinking instances) how this would be an advantage over using DigitalOcean with CentOS droplets or using local homebrew kit?
As ERPNext is aimed at small-to-medium business operators, a good month’s growth should be easily covered by either existing solution. I am personally looking forward to seeing what Bench/Frappe/ERPNext evolve into over the next 12 months and hope that Webnotes’ resources can be devoted to extending & optimising these already large ideas. Of course, if creating optimised cloud deployment and load/speed optimisation is your thing, then I am sure devs will love using it.
Best Regards,
Rick

Yes, that sounds doable with current tooling; just a few things off the top of my head,

  • You can separate the database from the app container (bench set-mariadb-host).
  • You will have to think about something for the files. I think the most reliable solution would be to keep the files on the host machine and mount them in the bench container (i.e. the app container). This is a limitation at the moment and will be removed when we have a feature for remote hosting of files.
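A sketch of both points together, with hypothetical paths and image names (the `bench set-mariadb-host` command is the one mentioned above):

```shell
# keep site files on the host and mount them into the app container
docker run -d --name frappe-app \
  -v /srv/frappe/sites:/home/frappe/frappe-bench/sites \
  -p 8000:8000 partner/frappe-bench
# point the bench at an externally managed MariaDB
docker exec frappe-app bench set-mariadb-host db.internal.example
```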

If you have experience with containers / docker / mesosphere, I think you can go ahead with it. We’ll support any specific issues you face.

@Dale_Scott and @System19

I really see the simple bench system being supported by us for a long time, as for now it’s the most sensible way to deploy for small organizations. Also, supporting only Linux containers would mean killing support for BSD and OS X too.

David’s use case is more suitable for a partner who will have to support a lot of customers, probably most of them running customized codebases. For this use case, containers would give the most bang for the buck in computational resources, with the isolation it requires.

@pdvyas
Nice, thanks. The file persistence is the reason I won’t give it a go yet, because the current solution is far from optimal. Actually, you mount files from volume-only containers, which are themselves ephemeral; only the underlying volumes are not, so you might end up having a stale volume somewhere without an attached container… They are working on that: Proposal: Add volume ls/inspect/rm/create commands by cpuguy83 · Pull Request #8484 · moby/moby · GitHub

Hi Dale, I completely agree with your point. Although, what you identified as overhead is actually a learning extra, which then results in a simpler deployment model. As the optimal paradigm here is development-production parity, there are some time benefits in doing things the Docker way, also in a local environment.

Actually, what a Dockerfile (and the image built from it) reduces to is a written and (almost deterministically) reproducible history of the commands you hacked in to set up your CentOS and install from DVD. Think of it as stripping away this step completely, as we might have some best practice coded into the erpnext repository. Actually, it gets simpler. The additional steps in the traditional one-server case would be to run CoreOS instead of CentOS and hack in docker run with an IMAGE which resides in a public repository.
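In other words, a Dockerfile is just that install history written down. A much-abbreviated, hypothetical sketch (the package list here is illustrative, not the actual ERPNext requirements):

```dockerfile
# Start from the same base you would otherwise install from DVD
FROM centos:7
# Each RUN line replays, deterministically, a command you would
# otherwise have typed by hand after the install
RUN yum install -y epel-release \
 && yum install -y python-pip mariadb-server redis nginx
# ... install bench/frappe/erpnext here, per the project's install docs ...
EXPOSE 8000
CMD ["bench", "start"]
```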

@System19 @pdvyas

It is true, that the real power lies in the scaling case, but I wanted to point out, that what makes life simpler in the “big” world also makes life simpler in the “small” world.

On BSD, OS X, and Windows you might adopt a boot2docker workflow in a virtual machine. This strips away the practical need to maintain various distributions, as it is a sensible and efficient way to run it. Not sure if this would be a real benefit in maintenance terms, though.
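That boot2docker workflow is only a few commands (as of the boot2docker CLI of that time):

```shell
boot2docker init                  # create the tiny Linux VM once
boot2docker up                    # boot it
eval "$(boot2docker shellinit)"   # point the local docker client at the VM
docker run hello-world            # from here on, docker behaves as on Linux
```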

A unique deployment workflow would strip entropy out of this (for many) difficult topic, making it easier and more reliable (that is a benefit of bench, too). Stripping away the environment building would save you a lot of apt-get and Chef, Puppet, Ansible, or whatever orchestration work. This would make things easier for everyone, and more reliable, as best practice in setting up the host machine environment would be built into the Dockerfile (hopefully). The configuration (database, filestore, networking, [and resources - which is useful only in the scale case]) would live outside of the app environment, making everything much more transparent and more easily manageable.

Actually, I wanted to raise strategic awareness with this discussion. Of course, this kind of stuff comes with switching costs, but I’m convinced that in the long run the benefits outweigh those switching costs for every use case. Of course I agree that it’s important to have something like bench for now, but I hope you bear in mind, when deciding about improving bench (and to what extent), that on the middle time frame there is definitely a bright star rising.

For now, I really want to wait for better persistent volume management, and to see some results from Docker Swarm, Docker Compose, and Docker Machine, in order to see where the evolution is going and how Docker will embed with the other ecosystem tools. I think then it’s time to give it a try, not only an experimental one or in the style of a self-learning project :wink:

Hi David, I completely agree that docker abstracts and simplifies deployment, but I doubt I would benefit from the “learning extra” until my deployment is significantly more complex (e.g. a few more developers than just me, test and production servers, a test person, etc.).

It would be really great if you could post a workflow tutorial when you get it all figured out. Being able to follow your lead could certainly help change my mind.

Dale

We can start with something like what the Discourse project does.

They mount a “host” directory as a volume. So, we can make a Docker image with all deps installed (MariaDB, redis, nginx, supervisor, etc.) along with the bench utility, and put the frappe-bench dir (which holds all config and site files) & the MariaDB data dir on the shared volume.
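Roughly, in the Discourse style (host paths and the image name are hypothetical):

```shell
# everything (MariaDB, redis, nginx, supervisor, bench) baked into one image;
# all mutable state lives on the host via the shared volumes
docker run -d --name erpnext \
  -v /var/frappe/frappe-bench:/home/frappe/frappe-bench \
  -v /var/frappe/mysql:/var/lib/mysql \
  -p 80:80 frappe/erpnext-allinone
```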

This would ease the “app deployment” part. (I am still wondering about the cost-benefit of this. It’s an easier option for RoR folks, as setting up a Rails app requires very precise versions of Ruby, Rails, etc., and is really a lot of work to set up from scratch. This is not the case with Frappe/ERPNext.)

For the scaling people, we can have a Docker image with only the “bench”, i.e. the app. They can set up their own MariaDB, redis, and files containers and just configure the pure bench container to use these services.
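That split, in command form (the bench-only image name is illustrative):

```shell
# separately managed backing services
docker run -d --name mariadb -e MYSQL_ROOT_PASSWORD=secret mariadb
docker run -d --name redis redis
# the pure "bench" app container, wired against them
docker run -d --name bench --link mariadb:mariadb --link redis:redis \
  -p 8000:8000 frappe/bench-only
```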


@Dale_Scott
We plan to set up similar development and deployment workflows for Odoo and ERPNext, based on Docker and some adjacent tooling. Once we get there, I’ll be happy to follow up on this discussion and continue my “lobbying” work with a more practical example. Sounds like a good idea!

@pdvyas The all-in-one Docker app is a common start, and it’s a first step. What this doesn’t capture is the beauty of development/production parity, as it is not the deployment paradigm of Docker. But I think once Compose is out there, it might be very easy to set up the Docker “flock” with a redis container, a mariadb container, and an nginx container. So you get exactly the same out-of-the-box effect while doing things “the right way”. I think having supervisor bundled is no problem, as more advanced stuff would be overkill for the “small” use case.

Anyhow, I think it is a very good decision to put some basic Docker stuff into the repo; this might already attract more discussion and probably people who are already further up the learning curve than I am :wink: Thanks!
