
These days it seems to be taken for granted that containers are used for servers. I was a junior developer back then, but I can recall some of it. I'll lay out how things were as I remember them, with a rough sketch of the manual flow after the list, and I'd like someone to fill in the missing parts.

  1. create a zip file (containing all the code + packages)
  2. ssh into the server
  3. unzip it
  4. configure Apache
  5. set up multiple servers (optional)
  6. implement auto scaling with configuration files... (really don't remember this part)
    • I believe this required something like Chef or a similar program where you'd have a configuration file that EC2 could use, or bash scripts that would basically run all the commands
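
From memory, steps 1 to 4 amounted to something like the script below. This is only a rough sketch; the host name, paths, site name and the exact Apache commands are made up for illustration:

    #!/usr/bin/env bash
    # Rough sketch of the old manual deploy flow.
    # Host, paths and site name are placeholders, not real values.
    set -euo pipefail

    HOST=deploy@web1.example.com
    RELEASE="myapp-$(date +%Y%m%d%H%M).zip"

    # 1. create a zip file with the code + vendored packages
    zip -r "$RELEASE" app/ vendor/

    # 2./3. copy it to the server and unzip it there
    scp "$RELEASE" "$HOST:/tmp/"
    ssh "$HOST" "unzip -o /tmp/$RELEASE -d /var/www/myapp"

    # 4. point Apache at the new code and reload it
    ssh "$HOST" "sudo cp /var/www/myapp/apache/myapp.conf /etc/apache2/sites-available/myapp.conf"
    ssh "$HOST" "sudo a2ensite myapp && sudo service apache2 reload"
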
Muhammad Umer
    Typically you'd skip the "scaling" step. Most services don't need to scale. Many people would also skip the step where they use a dedicated server, and just pay someone to add their site to an Apache config on a machine that is monitored by professionals. Running your own infrastructure is a business decision, and usually, the answer to that decision is "no." – Simon Richter Apr 26 '21 at 10:42
  • Fair point. Is there a book/course you'd recommend for learning to manage servers without the cloud/Docker/Kubernetes parts? – Muhammad Umer Apr 27 '21 at 11:33

1 Answer


Linux servers (as you mentioned ssh):

Puppet, Ansible, Chef, Salt; before that, Python/Ruby scripts. Before that, Perl scripts. Before that, bash scripts. Before that, punch cards.

DEBs and RPMs rather than ZIP files (so you could version deployments and use standard tooling). It's not too hard to say "require apache" in a deb/rpm and then drop a file into sites-enabled or a similar folder.
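
For example, the DEBIAN/control file of a hypothetical "mysite" package (name, version and paths below are placeholders) might declare the Apache dependency like this:

    Package: mysite
    Version: 1.0-1
    Architecture: all
    Maintainer: Ops Team <ops@example.com>
    Depends: apache2
    Description: Example site shipped as a deb

The package tree would also carry the site's files and a vhost under etc/apache2/sites-available/, and you'd build, install and enable it with something like:

    # dpkg/apt pulls in apache2 via the Depends line above
    dpkg-deb --build mysite_1.0-1
    sudo dpkg -i mysite_1.0-1.deb || sudo apt-get -f install
    sudo a2ensite mysite && sudo service apache2 reload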

Auto scaling wasn't a thing until cloud environments came along.

Timothy c