Here’s what I got done this week:
2026 Week 3 (1/19-1/23)
- Friday 1/23
- Docker:
- Understanding the worst mistakes that are made with Docker and how to avoid them.
- Treating Containers like Virtual Machines
- Bloating with unnecessary processes or managing running containers instead of rebuilding.
- Run only one process (or a set of tightly coupled processes) per container, and use docker exec for debugging.
- Running everything as root
- Exposes unnecessary security vulnerabilities.
- Create and switch to a non-root user in the Dockerfile (via the USER instruction).
- Using secrets in images or compose files.
- Security risk if files get committed to version control.
- Secrets become part of the image layer history and are extremely difficult to revoke once leaked.
- Building large, inefficient images.
- Using large base images and installing unnecessary packages.
- Leads to long build times and wasted bandwidth and storage.
- Ignoring Volumes and Data persistence.
- Trying to store data in container filesystem.
- Container storage is “ephemeral”: data is lost when the container is removed, not just stopped. External volumes are required to retain data.
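Several of these fixes show up together in a Dockerfile. A minimal sketch, assuming a small Python app (base image, user name, and file names are illustrative, not from this log):

```dockerfile
# Slim base image keeps the final image small (avoids bloat)
FROM python:3.12-slim

# Create and switch to a non-root user instead of running as root
RUN useradd --create-home appuser
USER appuser
WORKDIR /home/appuser

# Copy only the application code; never bake secrets into image layers --
# pass them in at runtime (environment variables or Docker secrets) instead
COPY --chown=appuser:appuser app.py .

# One process per container
CMD ["python", "app.py"]
```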
- Built a container with a static html page served with nginx.
- Added a storage volume and a text file.
- Successfully persisted that file after deleting the container and building it again.
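The persistence test can be reproduced with commands along these lines (volume, container, and file names are my own placeholders):

```shell
# Create a named volume and mount it into an nginx container
docker volume create site-data
docker run -d --name web -v site-data:/data -p 8080:80 nginx:alpine

# Write a file into the mounted volume
docker exec web sh -c 'echo hello > /data/note.txt'

# Delete the container entirely, then start a fresh one on the same volume
docker rm -f web
docker run -d --name web2 -v site-data:/data -p 8080:80 nginx:alpine

# The file survives: it lives in the volume, not the container filesystem
docker exec web2 cat /data/note.txt
```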
- Thinking about how to do logging and performance metrics for a container.
- Run the monitoring stack as containers alongside the app. These are the basic components you need:
- Target (thing being measured)
- Collector (grabs metrics from your system)
- Visualizer (displays those metrics in a readable, understandable way)
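A common concrete pairing for those roles is Prometheus as the collector and Grafana as the visualizer; this compose sketch shows the shape only (that pairing is my assumption, and Prometheus would still need a scrape config, and the target a metrics endpoint):

```yaml
services:
  app:            # target: the thing being measured
    image: nginx:alpine
  prometheus:     # collector: scrapes metrics from the target
    image: prom/prometheus
    ports: ["9090:9090"]
  grafana:        # visualizer: dashboards over the collected metrics
    image: grafana/grafana
    ports: ["3000:3000"]
```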
- Thinking through Volume storage in Fargate and ECS.
- Fargate tasks can’t use EBS volumes the way EC2-backed tasks can (EBS for Fargate is per-task and ephemeral), so the standard solution is S3 for object storage and backup, and EFS for a standard shared filesystem.
- With Azure Container Instances (ACI), Azure Files fills the same role.
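In an ECS task definition, the EFS route looks roughly like this (the file-system ID and paths are hypothetical):

```json
{
  "volumes": [
    {
      "name": "data",
      "efsVolumeConfiguration": {
        "fileSystemId": "fs-12345678",
        "transitEncryption": "ENABLED"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "web",
      "mountPoints": [
        { "sourceVolume": "data", "containerPath": "/data" }
      ]
    }
  ]
}
```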
- Monday 1/19
- Deployed Docker’s getting-started sample, a simple to-do app. Ran docker compose to build and run it all locally.
- Containers included:
- React JS front-end
- Node.js backend (API) to create and retrieve items
- MySQL database to store the items.
- PHP web interface
- Traefik reverse proxy to route everything through localhost port 80.
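The Traefik piece of a stack like that is usually wired up with container labels; a minimal compose fragment under that assumption (service names and the routing rule are illustrative, not the sample’s actual file):

```yaml
services:
  proxy:
    image: traefik:v2.11
    command: --providers.docker
    ports: ["80:80"]
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  api:
    image: node:20-alpine   # stand-in for the Node.js backend
    labels:
      - traefik.http.routers.api.rule=PathPrefix(`/api`)
```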
- Brushed up on some Bash skills.
2026 Week 2 (1/12-1/18)
- Friday 1/16
- Published my install and run of Docker from the previous day.
- Studied Docker for 3 hours.
- Thursday 1/15
- Changed website design by customizing CSS.
- Studied Docker
- Installed Docker Desktop on an Ubuntu Linux VM
- Overcame several challenges with virtualization being blocked by the host machine.
- Successfully ran the welcome container on localhost port 8080.
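Docker’s own tutorial uses the docker/welcome-to-docker image for this step; assuming that was the image, the run amounts to:

```shell
# Map host port 8080 to the container's port 80,
# then browse to http://localhost:8080
docker run -d -p 8080:80 docker/welcome-to-docker
```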
- Monday 1/12
- Tweaked website design: Added the status page and changed how the projects reacted when opened.
- Got assigned to watch a video by 37signals about their data migration off of AWS S3. Key takeaways from the video:
- Large cloud data transfers are difficult and costly. Moving that much data is a large endeavor.
- AWS allows a waiver for big data transfers. Basically an agreement that says if you fully complete the transfer in a specified time frame, they’ll refund the bandwidth costs with AWS credits.
- Transfers are either request-limited or bandwidth-limited. Bandwidth limits are generally a large-object-size problem, while request limits are an issue with many smaller objects.
- Separating data into batches for transfer: each batch must be testable, retriable, observable, etc.
- Infrastructure must be able to handle the transfer. You need to take into account reliability of wherever you’re transferring to.
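One way to make transfer batches testable and retriable is to drive the copy one prefix at a time; a sketch with the AWS CLI (bucket names, prefixes, and the destination path are placeholders, not 37signals’ actual tooling):

```shell
# Sync one prefix (batch) at a time so each batch can be retried,
# verified, and logged independently
for prefix in 2019/ 2020/ 2021/; do
  aws s3 sync "s3://source-bucket/$prefix" "/mnt/storage/$prefix" \
    && echo "OK $prefix" >> transfer.log \
    || echo "FAILED $prefix" >> transfer.log
done
```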
2026 Week 1 (1/5-1/10)
- Began studying Docker
- Immersed myself in the fundamentals by watching a few videos. Here are some concepts I learned:
- VM vs. Container differences (Kernel/OS sharing, portability, etc.)
- Images vs. Containers
- Docker Registries (Docker Hub)
- Did a project using Terraform to provision a VPC and EC2 instance on AWS.
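That project reduces to a handful of resources; a Terraform sketch with placeholder CIDRs, AMI, and instance type (not the actual config):

```hcl
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "public" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"
  subnet_id     = aws_subnet.public.id
}
```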
- Concluded studying for, and passed, the Terraform Associate (003) exam.