Once upon a time, there was the LAMP stack… Linux, Apache, MySQL, PHP/Python/Perl - that's what I used to start my software engineering journey. Things were simple, in the sense that there were fewer "options" to choose from. But today, I have to run multiple types of databases, an event store plus a processing pipeline, and other behemoths just to work on my application locally.

Well, maybe I can use docker-compose to keep things clean, tidy, and reproducible for the other team members, but what about compute resource usage? I don't know about the M2/M3 users out there, but my Linux laptop either halts or burns my lap.
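
To make that concrete, the local stack for even a modest service can look something like the compose file below. This is purely illustrative - the services and image tags are just examples of the "dependency zoo" I mean:

```yaml
# Illustrative only: the kind of dependency zoo a single app can drag in.
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: dev   # throwaway dev-only credential
  mongo:
    image: mongo:7
  kafka:
    image: bitnami/kafka:3.7   # event store / pipeline piece (KRaft config omitted for brevity)
```

Multiply that by a few micro-services plus some observability tooling, and the fans start spinning.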

Soo, how 'bout I offload some of the compute-intensive stuff to a remote machine, or maybe the cloud? Although that introduces other factors like cloud cost, internet connectivity… what if you have an old machine lying around in your house? That's exactly what I did at some point.

But there’re still questions and issues…

  1. Reproducibility: So, you have all the containers with their configuration specified in a docker-compose file - great! Everything the platform needs, including the other services you depend on (if you're into micro-services), is now accessible to your application and reproducible with a simple `docker compose up`. Boom, done! But what about the CLI tools, desktop apps, IDE extensions, and so on? Maybe you or your teammates each use multiple machines for development work. It's not only the platform, or the application you're working on, that should be containerized, but the whole Dev Environment! (A sketch of this idea follows this list.)

  2. Dev-Prod Parity: Are you willing to sacrifice the Twelve-Factor App methodology because of the constraints in your local dev setup? Not so ideal, right? In this day & age, your app should typically be ready for multi-instance deployment, or even multi-region for HA. It's hard to anticipate how your app behaves under that kind of production dynamic if you've only ever deployed a single instance of it from a docker compose manifest (see the Deployment sketch after this list). Also, if your target deployment platform is Kubernetes (EKS, K3s, AKS, GKE, OKE, Civo, etc.), why docker compose!? why docker!? why containers!? ahem… ok, I'll explain the last one, I promise 🤞

  3. CNCF Landscape & Kubernetes: Have you seen the CNCF Landscape?

    Cloud Native Landscape

    Yah, it's Massive! But my point is, many of those tools are not easy to configure in simple compose files. They may not document any setup beyond a Helm chart or Kubernetes manifests. That means you'll face issues if your application architecture is not trivial, or if the tools you use are from the recent generation.

  4. Change-Build-Test (CBT) Performance: How do you containerize your app for the dev environment? Maybe you don't; instead you use port bindings to expose all the services you depend on, and access them from your app running as a local process. So, there are 4 options here:

    1. Build your app locally and copy the final artifact into the container image (using COPY in a Dockerfile, for example). Cons: increased container build-context sync time, due to the nature of build artifacts (e.g., large in size, hard to sync only the delta of a change). Pros: potentially reuses the local artifacts your IDE has already created - less redundant work!
    2. Sync (mount) the codebase inside the container. In this case, hot-reload and the like are meant to happen inside the container. This may not work on Windows machines due to issues with filesystem events not propagating from the host into the container (not sure if that's fixed by now). Cons: a significant amount of redundant work by the IDE and by the processes inside the container (compiler, bundler, etc.) on each code change. Pros: only the changed files need to be synced, which greatly improves the feedback loop when the container runs on a remote host (see the compose sketch after this list).
    3. Using Tilt, Skaffold, etc. Tilt, for example, can combine both (1) and (2): https://docs.tilt.dev/example_java.html Cons: debugging can be hard in certain languages/runtimes, because the application process runs inside the container. Pros: a fast enough feedback loop / CBT cycle, whether the deployment target is on the same machine or remote (a remote k8s cluster or docker context).
    4. No containerization. Cons: hard to match the production environment. Pros: bare-minimal overhead.
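
On point (1), one way to containerize the dev environment itself is to ship the toolchain as just another compose service next to the app's dependencies. A minimal sketch, assuming a hypothetical my-team/devbox image that bundles the CLI tools:

```yaml
# Sketch only: service and image names are made up.
services:
  devbox:
    image: my-team/devbox:latest   # hypothetical image with CLIs, compilers, linters baked in
    volumes:
      - .:/workspace               # the codebase, mounted from the host
    working_dir: /workspace
    command: sleep infinity        # keep it alive; attach via `docker compose exec devbox bash`
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: dev
```

Dev Containers and similar specs formalize the same idea, letting the IDE itself attach into that environment.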
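
On point (2), the parity gap shows up as soon as you ask for more than one instance. In Kubernetes that's a one-line change; a sketch with made-up names:

```yaml
# Sketch only: runs several replicas even in dev, so concurrency and
# statelessness assumptions get exercised before production.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                       # a single-instance compose setup never tests this
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-team/my-app:dev   # hypothetical image
          ports:
            - containerPort: 8080
```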
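
And for option (2) of the CBT list above, the mount-based sync is usually just a volume entry; again a sketch with made-up names and commands:

```yaml
# Sketch only: source is bind-mounted so the watcher inside the
# container sees changes; dependencies stay baked into the image.
services:
  api:
    build: .
    volumes:
      - ./src:/app/src        # sync only the source tree, not dependencies
    command: npm run dev      # hypothetical hot-reload command, runs inside the container
    ports:
      - "8080:8080"
```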

I think we should talk more about the development environments of our software. They're no less important than the production environment, and we should strive to improve them as much as possible. In the beginning it may feel like a waste of time… and in the end it may still look the same 😅, because we take these things for granted and tend not to see how past actions led to the present. Since I work on DevOps / Platform Engineering stuff, I had to solve some of these dev environment issues as "mandatory", not "optional", work. In the rest of this post, I'll introduce some of the tools and approaches that made my life much easier. I also think most of this is applicable at whole-organization scale and can save a lot of paid employee hours!

Kubernetes for Dev


Kubernetes, without any doubt, has a steep learning curve. Not everyone involved in a project should have to be well-versed in hundreds of CRDs and YAML manifests!

But after all these years, people still write Deployment, Service, and Ingress manifests by hand, under-utilizing projects like KubeVela (OAM)… anyway, that's a different long discussion. This post is only about Dev Env & Dev Experience (DevX), and the first question someone may ask is WHY!? The context of the question is, I think, obvious: if you're working on a single application or micro-service, what's the point of taking on all the overhead that comes with Kubernetes? Yes, lightweight Kubernetes distributions like K3s and minikube (with the docker driver) exist, but it still sounds like overkill.

Firstly, I don't think the compute overhead is significantly higher than what you would have with docker-compose. A bare-minimal k3s or minikube setup (without metrics collection) may run 3-4 extra containers, each with a very reasonable purpose (e.g., DNS resolution, the K8s API server), and none of them uses much CPU or memory at idle. Also, a single k8s cluster can be shared among multiple developers and multiple projects' dev-stage workloads - with namespace-level isolation (a sketch below) or with tools like vCluster. That opens up the possibility of sharing large, resource-intensive deployments among projects, and gets you closer to dev-prod parity.
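
For the shared-cluster setup, namespace-level isolation can be as simple as a namespace plus a resource quota per developer. A minimal sketch (the names are made up):

```yaml
# Sketch only: one namespace per developer on a shared dev cluster,
# with a quota so nobody starves the others.
apiVersion: v1
kind: Namespace
metadata:
  name: dev-alice
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-alice-quota
  namespace: dev-alice
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
```

vCluster goes a step further and gives each developer what looks like their own API server inside such a namespace (e.g., `vcluster create dev-alice`).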

The other thing is the learning curve. That's a very valid argument, but ideally you should have a simple interface for delivering your software - OAM or Knative-like manifests, or a K8s CRD - instead of having to deal with platform-specific details. More precisely, we should build and use a Software Delivery Platform (SDP) and forbid direct access to the K8s cluster. Done right, the learning curve shouldn't be there at all.
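
To illustrate what that simple interface can look like, here's a hedged sketch of a KubeVela (OAM) Application: the developer declares one component, and the platform expands it into Deployment, Service, Ingress, and whatever else it standardizes on (the names and image are made up):

```yaml
# Sketch only: the app team owns this small manifest; the platform
# team owns what it expands into.
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: my-app
spec:
  components:
    - name: my-app
      type: webservice            # a built-in KubeVela component type
      properties:
        image: my-team/my-app:dev
        ports:
          - port: 8080
            expose: true
```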

Mirrord FTW