Following. We use DO, but we're getting too many websites on one server, so I'm considering moving to two app servers and a DB server behind a load balancer, or possibly using replication on the DBs so we don't need a separate DB server. Setting this up is pretty straightforward with server experience, but I don't want to lose the power (and updates) of WordOps.

We currently offload backups with rsnapshot to another server, but I have some interest in using S3. One downside is that we don't have the space to back up our whole server locally before sending it off. Doing a PUT to S3 for each site looks doable but would be a lot more expensive.
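
Assuming the AWS SDK for PHP, a per-site PUT could look roughly like the sketch below, archiving one site at a time so a full-server staging copy is never needed. The bucket name, region, and /var/www layout are placeholders, not anything from this thread.

```php
<?php
// Sketch: archive each site separately and PUT it straight to S3, so the whole
// server never has to be staged locally. Requires the AWS SDK for PHP
// (composer require aws/aws-sdk-php). Bucket, region, and the /var/www layout
// are assumptions -- adjust to your setup.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1', // placeholder region
]);

$bucket = 'example-site-backups'; // placeholder bucket
$date   = date('Y-m-d');

foreach (glob('/var/www/*', GLOB_ONLYDIR) as $siteDir) {
    $site    = basename($siteDir);
    $archive = sys_get_temp_dir() . "/{$site}-{$date}.tar.gz";

    // Only one site's tarball needs local disk space at a time.
    exec(sprintf('tar -czf %s -C %s .', escapeshellarg($archive), escapeshellarg($siteDir)));

    $s3->putObject([
        'Bucket'     => $bucket,
        'Key'        => "{$site}/{$site}-{$date}.tar.gz",
        'SourceFile' => $archive,
    ]);

    unlink($archive); // free the space before the next site
}
```

Note that a single PUT tops out at 5 GB per object; for larger sites the SDK's MultipartUploader would be the workaround.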

We use AWS' ALB (Application Load Balancer) which allows me to auto-scale multiple servers. I'm working on a blog post and I'll update once it's ready.

You may run into bandwidth costs as well if you back up to S3. You may want to see if DO offers something so you can keep the bandwidth usage on your virtual network. I do that within AWS so I can transfer hundreds of GB of data very quickly and without any added expense.

8 months later

Hi,
Was this ever done and documented? Just wondering what file sharing you use so that updates/uploads are reflected on any number of servers.
Also, do you use RDS or something similar for the DB in a highly available stack?

In the 8 months since, I've migrated off the load-balanced environment, as it was proving unstable. I migrated to WPEngine for that project and it's running smoothly.

I would love to see someone else attempt a clustered environment through WordOps, as I have no doubt it's possible.

4 months later

I am also interested in running WordOps in a load-balanced environment, either on AWS or Azure. If anyone has experience with this, I'm willing to hire them as a consultant on a project I have. Hit me up.

    12 days later

    I'd be interested in seeing what comes of that, emanweb - I was looking for something similar as well. One idea was just to get a docker-compose setup so WordOps could manage the configuration, which would then be "baked into" Docker images and deployed as more-or-less immutable containers in a cluster.

    I did something similar with AMIs in AWS.

    The main issues I ran into were media storage (which I used EFS for) and plugin/theme/core updates.

    I would have to make all the updates on a dev instance, then roll that dev instance out to production inside the auto-scaling group.

      jond1 thanks, yeah I was imagining having a local dev setup, using WordOps to manage the nginx conf, then committing those nginx changes to git, and having a CI/CD system (e.g. Google Cloud Build or GitHub Actions) be responsible for creating a new image. From there, the image could get deployed to Kubernetes [1], Cloud Run [2], or loaded onto a new machine [3].

      For media storage, in my case I was thinking of using the WP-Stateless plugin [4] so rather than using local files it'd be using Google Cloud Storage. Something else that could be interesting is a filesystem backed by S3 or Cloud Storage, like https://juicefs.com/ (claims great performance, but still in beta).

      For plugin/theme/core updates, I was thinking of using something like the Roots Bedrock [5] WordPress setup, which keeps the versions in a Composer file, and aiming for zero-downtime deployments via Deployer [6]. I'm envisioning that the actual site code (composer.json and custom theme/plugin files) would be in its own git repo, separate from the repo that has the nginx/docker bits, so that a) it'd be possible to have one infrastructure repo serving many app repos, and b) there'd be no need for any CI/CD logic that decides what to build based on whether infra or app code has changed.

      I think the tricky parts would be setting it up in such a way that a) the thing that builds & deploys the machines/containers knows how to pull in the app code for the various sites when the machine(s) come online, and b) those machines can be started up, with their updated nginx configs and newly installed sites/apps, without any/much downtime.

      1: https://cloud.google.com/kubernetes-engine/docs/tutorials/gitops-cloud-build
      2: https://acaird.github.io/computers/2020/02/11/github-google-container-cloud-run
      3: https://dragonprogrammer.com/deploy-docker-container-gcp/
      4: https://wordpress.org/plugins/wp-stateless/
      5: https://github.com/roots/bedrock
      6: https://deployer.org/

      Sounds like a solid plan!

      8 days later

      Hi! I have done this before with EasyEngine v3 (pretty much the same as WordOps). I gave a talk about this (video: https://wordpress.tv/2016/11/22/russell-heimlich-scaling-up-wordpress-on-amazon-web-services/, slides: https://slides.russellheimlich.com/scaling-wordpress-on-aws/#/).

      The biggest thing to consider is that you can't rely on the server's local storage. You need to distribute everything, because as you bring up and kill servers they will get out of sync. I'll explain.

      1) Be prepared to have an external database. If you have multiple servers, they can't each have their own copy of the database. Amazon's RDS service is well worth the money: it's managed and scales to meet your needs without you having to do anything. I had no problems with Amazon Aurora (it's MySQL-compatible and has cheaper pricing): https://aws.amazon.com/rds/aurora/?aurora-whats-new.sort-by=item.additionalFields.postDateTime&aurora-whats-new.sort-order=desc
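
      For reference, pointing WordPress at the external database is just the standard wp-config.php constants; the Aurora writer endpoint below is a made-up placeholder.

      ```php
      // wp-config.php -- every app server points at the same external database.
      // The endpoint is a placeholder; use the writer endpoint RDS/Aurora gives you.
      define( 'DB_NAME',     'example_wp' );
      define( 'DB_USER',     'example_user' );
      define( 'DB_PASSWORD', 'change-me' );
      define( 'DB_HOST',     'example-cluster.cluster-abc123.us-east-1.rds.amazonaws.com' );
      ```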

      2) Store uploaded media in an S3 bucket. A free plugin like https://github.com/humanmade/S3-Uploads works great. When media gets uploaded to the media library the media is copied to an S3 bucket and the plugin rewrites the URLs for you automatically. This way you don't need to worry about keeping the local hard drives of your servers in sync with uploaded media items.
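
      If I remember right, S3-Uploads is configured entirely through wp-config.php constants along these lines (bucket and region below are placeholders -- check the plugin's README for the exact constant names):

      ```php
      // wp-config.php -- humanmade/S3-Uploads settings (placeholder values).
      define( 'S3_UPLOADS_BUCKET', 'example-media-bucket' );
      define( 'S3_UPLOADS_REGION', 'us-east-1' );
      // On EC2 you can skip static keys and let the instance profile supply credentials.
      define( 'S3_UPLOADS_USE_INSTANCE_PROFILE', true );
      ```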

      3) If you're going to use page caching, WordOps is great for this with its Redis caching option. All of the servers in your cluster can use the same cache. Amazon's ElastiCache service is great for this (https://aws.amazon.com/elasticache/). You can get by with the smallest instance, too.
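
      If you use the Redis Object Cache plugin for the WordPress object cache, it reads its connection settings from wp-config.php, so every server can point at the same ElastiCache node; the endpoint below is a placeholder.

      ```php
      // wp-config.php -- shared object cache for every server in the cluster.
      // The host is a placeholder for your ElastiCache primary endpoint.
      define( 'WP_REDIS_HOST', 'example-cache.abc123.0001.use1.cache.amazonaws.com' );
      define( 'WP_REDIS_PORT', 6379 );
      ```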

      4) Be prepared not to use the typical plugin update flow from within the WordPress admin. Why? When you update a plugin, the files are copied only to whatever server you happened to hit when you performed the update, so one server will have the updated plugin files but the others won't. In my situation this was OK because the entire site was version controlled: plugin updates were done locally, committed, and then deployed to the live servers.

      5) You need a Continuous Integration/Continuous Deployment (CI/CD) pipeline in place in order to get changes to your servers. My favorite approach was to have a build-specific version of your repo. When you push a commit to the main branch in your repo, a process kicks off to:
      - Compile Sass/JavaScript files
      - Update any dependencies from Composer or NPM
      - Commit the compiled assets directly to the build repo
      - Trigger the process of letting each server know there is an update and that it should do a git pull to bring down the latest changes (a rough sketch of this step follows below).
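
      A minimal sketch of that last step, assuming a pull-based approach where each server exposes a small deploy hook the pipeline can hit (the secret, header name, and site path are made up):

      ```php
      <?php
      // deploy.php -- illustrative pull-based deploy hook, one per server.
      // The CI pipeline POSTs to this URL on every instance after the build repo updates.
      $secret = getenv('DEPLOY_SECRET') ?: 'change-me';

      if (!hash_equals($secret, $_SERVER['HTTP_X_DEPLOY_TOKEN'] ?? '')) {
          http_response_code(403);
          exit('forbidden');
      }

      // Pull the latest build commit into this site's webroot (placeholder path).
      $output = shell_exec('cd /var/www/example.com/htdocs && git pull 2>&1');

      header('Content-Type: text/plain');
      echo $output;
      ```

      In practice the pipeline's final step would just hit this URL on every instance behind the load balancer (or you could push over SSH instead).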

      I also gave a talk about how this CI/CD pipeline worked:
      - Video:
      - Slides: https://slides.russellheimlich.com/circleci-aws/#/

      Once you get everything set up, it's great to be able to set up rules to have your servers grow and shrink based on traffic. If you have any questions or need any help, feel free to reach out: kingkool68 at the gmail.

      8 days later

      Thanks for sharing that, @kingkool68. I like the relative simplicity of the system you describe in your talk. When I get time I'll work out whether I can use some of those ideas in GCP.

      The trickiest part seems to be that there might not be a 1:1 equivalent of AWS CodeDeploy in Google Cloud. I could use Cloud Build [1] (it seems to be the closest equivalent [2]) to build and deploy new Docker images that contain the site code, but your model of just having CodeDeploy use git to fetch the new code seems like it'd be the fastest deployment option, especially if the server(s) were hosting multiple websites (the only deployment steps would be site-specific, vs. building a new Docker image that contains N different sites).

      I have found one blog post [3] where someone describes trying to set up an agent-based deployment system similar to AWS's, but on GCP, so that might be an option if there's nothing off the shelf.

      1: https://cloud.google.com/build
      2: https://cloud.google.com/free/docs/aws-azure-gcp-service-comparison
      3:
