At Amdocs, our CRM microservices were deployed manually to AWS EC2 instances. Every release was a 45-minute process involving SSH sessions, config file edits, and crossed fingers. When I was tasked with fixing this, Docker became the obvious answer — but containerizing production Java services that had been running on bare metal for years came with its own challenges.
The Challenge
The Amdocs CRM platform for JCOM ran three core Java/Spring Boot microservices handling billing, order management, and service activation for 10M+ telecom subscribers. These services were deployed directly on EC2 instances through a manual process: an engineer would SSH into each server, pull the latest JAR from an artifact repository, update environment-specific configuration files, and restart the service.
This process had several compounding problems. Each deployment took approximately 45 minutes per service, so a full release cycle across all three services consumed over two hours of engineering time. Configuration drift between environments was common: staging and production had subtly different settings that occasionally caused bugs that could not be reproduced locally. And rollbacks required the same manual process in reverse, making incident recovery painfully slow.
The Goal
My objective was to containerize all three microservices using Docker, establish development-production parity through Docker Compose, and automate the deployment pipeline to AWS EC2. The constraints were strict: zero downtime during the transition, no changes to the application code itself, and the solution needed to be maintainable by a team that had no prior Docker experience.
The Approach
Multi-Stage Dockerfiles
The first challenge was image size. A naive Dockerfile that included the full JDK and Maven dependencies produced images over 800MB. I implemented multi-stage builds where the first stage compiled the application using a Maven image, and the second stage copied only the resulting JAR into a minimal JRE-based image. This reduced image sizes from 800MB+ to under 200MB.
```dockerfile
# Stage 1: build the application with the full Maven/JDK toolchain.
FROM maven:3.8-eclipse-temurin-17 AS build
WORKDIR /app
# Copy the POM first so dependency downloads are cached across source changes.
COPY pom.xml .
RUN mvn dependency:go-offline
COPY src ./src
RUN mvn package -DskipTests

# Stage 2: copy only the built JAR into a minimal JRE image.
FROM eclipse-temurin:17-jre-alpine
COPY --from=build /app/target/*.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```
Docker Compose for Local Development
Configuration drift was solved by externalizing all environment-specific values into environment variables, managed through Docker Compose files. A single docker-compose.yml brought up all three services plus their Oracle database dependency locally, ensuring that every developer was running an identical stack. This eliminated the "works on my machine" class of bugs entirely.
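A minimal sketch of what such a Compose file might look like — service names, images, ports, and credentials here are illustrative placeholders, not the actual Amdocs configuration. It relies on Spring Boot's standard environment-variable binding (e.g. SPRING_DATASOURCE_URL) to inject the settings that previously lived in per-environment config files:

```yaml
# Hypothetical docker-compose.yml sketch; names and values are placeholders.
version: "3.8"
services:
  billing:
    image: crm/billing-service:latest
    ports:
      - "8080:8080"
    environment:
      # Spring Boot maps these env vars onto spring.datasource.* properties.
      SPRING_DATASOURCE_URL: jdbc:oracle:thin:@oracle:1521/XEPDB1
      SPRING_DATASOURCE_USERNAME: ${DB_USER:-crm}
      SPRING_DATASOURCE_PASSWORD: ${DB_PASSWORD:-changeme}
    depends_on:
      - oracle
  # The order-management and activation services follow the same pattern,
  # differing only in image name and host port.
  oracle:
    # A community Oracle XE image suitable for local development; the
    # production database itself was never containerized.
    image: gvenzl/oracle-xe:21-slim
    ports:
      - "1521:1521"
    environment:
      ORACLE_PASSWORD: ${DB_PASSWORD:-changeme}
```

Because every environment-specific value enters through variables, the same compose file can drive a developer laptop and a CI environment with nothing but a different .env file.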
Automated EC2 Deployment
For production, I set up a deployment script that built fresh images, pushed them to our container registry, SSH'd into EC2 instances to pull the new images, performed a health check on the new container before routing traffic to it, and kept the previous container available for instant rollback. The health check was critical — if the new container failed to respond on its health endpoint within 30 seconds, the script would automatically roll back.
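The core of that flow can be sketched as a small shell script. This is a hypothetical reconstruction, not the production script: the image name, registry, container names, ports, and the /actuator/health path are assumed placeholders. It shows the essential pattern — start the new container on a side port, poll its health endpoint for up to 30 seconds, and only retire the old container once the new one answers:

```shell
#!/usr/bin/env sh
# Hypothetical rolling-deploy sketch; all names and ports are placeholders.
set -u

IMAGE="${IMAGE:-registry.example.com/crm/billing-service}"
TAG="${1:-latest}"

# Poll a health endpoint once per second until it answers 2xx or we time out.
# Returns 0 on success, 1 on timeout.
wait_for_health() {
  url="$1"; timeout="$2"; elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    if curl -fsS --max-time 2 "$url" >/dev/null 2>&1; then
      return 0
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  return 1
}

deploy() {
  docker pull "$IMAGE:$TAG"
  # Run the new container on a side port while the old one keeps serving.
  docker run -d --name billing-new -p 8081:8080 "$IMAGE:$TAG"

  if wait_for_health "http://localhost:8081/actuator/health" 30; then
    # Healthy: retire the previous container but keep its image for rollback.
    docker rm -f billing-current 2>/dev/null || true
    docker rename billing-new billing-current
  else
    # Unhealthy within 30s: discard the new container; the old one was
    # never touched, so traffic is unaffected.
    docker rm -f billing-new
    echo "deploy failed: health check timed out, rolling back" >&2
    return 1
  fi
}

# Only deploy when explicitly asked, so the functions can be sourced safely.
if [ "${2:-}" = "run" ]; then
  deploy
fi
```

Rollback with this scheme is just re-running the script with the previous tag, which is what made the under-two-minute recovery possible.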
The Impact
Deployment time dropped from 45 minutes to 8 minutes per service — an 82% improvement. Full release cycles that previously took 2+ hours now completed in under 25 minutes. Configuration drift was eliminated entirely since every environment ran identical container images with only environment variables differing. The rollback process went from a manual 45-minute procedure to a single command taking under 2 minutes.
Beyond the direct time savings, the team's confidence in deploying increased significantly. Where previously the team batched releases into biweekly windows to minimize risk, we moved to deploying multiple times per week. The Docker Compose setup also cut new developer onboarding time from two days of environment setup to a single docker-compose up command.
Key Takeaways
Multi-stage builds are non-negotiable for production Java images — the size difference directly impacts pull times and startup speed. Environment parity through Docker Compose catches configuration bugs before they reach production. Health checks and automated rollback are essential — containerization without them just makes bad deployments faster. And keeping the solution simple enough for non-Docker-experts to maintain is what makes the change stick long-term.
Questions about containerization? Reach me at [email protected]