⏱ ~ 1 hour
$ whoami
michael.herman
Software Engineer at ClickFox.
By the end of this talk, you should be able to...
Containerization
Orchestration
Much of this tutorial comes from the following course I wrote at Testdriven.io...
Along with learning how to build the underlying code base for each service, you'll also be introduced to more advanced topics like:
https://github.com/testdrivenio/testdriven-app-2.2/tree/pytn
Name | Container | Tech
---|---|---
Users API | users | Flask, gunicorn
Users DB | users-db | Postgres
Client | client | React, React-Router
Nginx | nginx | Nginx
e2e Tests | N/A | TestCafe
Fire up the app locally:
$ git clone https://github.com/testdrivenio/testdriven-app-2.2 \
    --branch pytn --single-branch
$ cd testdriven-app-2.2
$ export REACT_APP_USERS_SERVICE_URL=http://localhost
$ docker-compose -f docker-compose-dev.yml up -d --build

NOTE: Using Docker Machine? Replace localhost above with DOCKER_MACHINE_IP.
An orchestration tool for running multi-container apps.
Often, when developing an application with a microservice architecture, you cannot fully test all of the services until you deploy to a staging server, which makes the feedback loop far too long. Docker speeds this up by making it easier to link together small, independent services locally.
Docker 101: http://mherman.org/docker-workshop
Pros

Cons
You need: strong communication, solid docs, mature DevOps practices, and lots of planning.
More on microservices: https://testdriven.io/part-one-microservices
https://github.com/testdrivenio/testdriven-app-2.2/tree/pytn/services/users/project/db
Build image, run container:
$ docker-compose -f docker-compose-dev.yml up -d --build users-db
Test/Sanity Check:
$ docker exec -ti users-db psql -U postgres -W
https://github.com/testdrivenio/testdriven-app-2.2/tree/pytn/services/users
Build image, run container:
$ docker-compose -f docker-compose-dev.yml up -d --build users
Test/Sanity Check:
# create and seed the db
$ docker-compose -f docker-compose-dev.yml \
    run users python manage.py recreate_db
$ docker-compose -f docker-compose-dev.yml run users python manage.py seed_db

# run unit and integration tests
$ docker-compose -f docker-compose-dev.yml run users python manage.py test
Navigate to http://localhost:5000 in your browser.
https://github.com/testdrivenio/testdriven-app-2.2/tree/pytn/services/client
Review the code, Dockerfile-dev, Dockerfile-prod, and docker-compose-dev.yml
Build image, run container:
# add env variable
$ export REACT_APP_USERS_SERVICE_URL=http://localhost

# build and run
$ docker-compose -f docker-compose-dev.yml up -d --build client
Test/Sanity Check:
Navigate to http://localhost:3007 in your browser
To test hot reload, first open the Docker logs:
$ docker-compose -f docker-compose-dev.yml logs -f [container-name]
Make a change to the code, watch the logs update!
$ docker-compose -f docker-compose-dev.yml logs -f users
Attaching to users
users | Waiting for postgres...
users | PostgreSQL started
users |  * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
users |  * Restarting with stat
users |  * Debugger is active!
users |  * Debugger PIN: 133-923-613
users |  * Detected change in '/usr/src/app/project/api/auth.py', reloading
users |  * Restarting with stat
users |  * Debugger is active!
users |  * Debugger PIN: 133-923-613
For more helpful commands, review https://testdriven.io/part-one-workflow
https://github.com/testdrivenio/testdriven-app-2.2/tree/pytn/services/nginx
Review the code, Dockerfile-dev, and docker-compose-dev.yml
Build image, run container:
$ docker-compose -f docker-compose-dev.yml up -d --build nginx
Test/Sanity Check: navigate to http://localhost
https://github.com/testdrivenio/testdriven-app-2.2/tree/pytn/e2e
Review the code and docker-compose-prod.yml
Update the containers, then test:
$ docker-compose -f docker-compose-prod.yml up -d --build

# add env variable
$ export TEST_URL=http://localhost

# create db
$ docker-compose -f docker-compose-prod.yml \
    run users python manage.py recreate_db

# run tests
$ testcafe chrome e2e
Assuming you already have an AWS account set up along with IAM, and your AWS credentials are stored in ~/.aws/credentials, create a new host on an EC2 instance:
$ docker-machine create --driver amazonec2 pytn
Once done, set it as the active host and point the Docker client at it:
$ docker-machine env pytn
$ eval $(docker-machine env pytn)
Grab the IP address associated with the new EC2 instance and use it to set the REACT_APP_USERS_SERVICE_URL environment variable:
$ docker-machine ip pytn
$ export REACT_APP_USERS_SERVICE_URL=http://DOCKER_MACHINE_IP
NOTE: The REACT_APP_USERS_SERVICE_URL environment variable must be set at build time, so it is available before we kick off Create React App's production build process.
Set the secret key:
$ export SECRET_KEY=my_precious
Build the images, spin up the containers:
$ docker-compose -f docker-compose-prod.yml up -d --build
Create and seed the database:
$ docker-compose -f docker-compose-prod.yml \
    run users python manage.py recreate_db
$ docker-compose -f docker-compose-prod.yml run users python manage.py seed_db
Update the TEST_URL environment variable and then run the e2e tests:
$ testcafe chrome e2e
For more, review https://docs.docker.com/machine/examples/aws/.
As you move from deploying containers on a single machine to deploying them across a number of machines, you need an orchestration tool to manage the arrangement and coordination of the containers across the entire system.
This is where ECS fits in along with a number of other orchestration tools - like Kubernetes, Mesos, and Docker Swarm.
Which one?
ECS is simpler to set up and easier to use, and you have the full power of AWS behind it, so you can easily integrate it with other AWS services (which we will be doing shortly). In short, you get scheduling, service discovery, load balancing, and auto-scaling out of the box. Plus, you can take full advantage of EC2's multiple availability zones.
If you're already on AWS and have no desire to leave, then it makes sense to use ECS.
Keep in mind, though, that ECS often lags behind Kubernetes in terms of features. If you're looking for the most features and portability, and you don't mind installing and managing the tool yourself, then Kubernetes, Docker Swarm, or Mesos may be right for you.
One last thing to take note of: since ECS is closed-source, there isn't a true way to run the environment locally in order to achieve development-to-production parity. (LocalStack?)
Awesome comparison resource -> https://blog.kublr.com/choosing-the-right-containerization-and-cluster-management-tool-fdfcec5700df
Most orchestration tools come with a core set of features. You can find those features below along with the associated AWS service...
We'll either cover the features with a ✅ directly or you'll see them in action in the demo.
The Elastic Load Balancer distributes incoming application traffic and scales resources as needed to meet traffic needs.
It's one of (if not) the most important parts of your application, since it needs to always be up, routing traffic to healthy back ends, and ready to scale at a moment's notice.
There are currently three types of Elastic Load Balancers to choose from. We'll be using the Application Load Balancer since it provides support for path-based routing and dynamic port-mapping, and it also enables zero-downtime deployments.
Target Groups are attached to the Application Load Balancer and are used to route traffic to the containers found in the ECS service.
Listeners are added to the load balancer, which are then forwarded to a specific Target Group.
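The listener-and-target-group mechanics can be sketched in a few lines. This is a hypothetical illustration, not the AWS API: the path patterns and target-group names are made up, and a real ALB evaluates listener rules by priority.

```python
import fnmatch

# Hypothetical listener rules: (path pattern, target group), first match wins.
LISTENER_RULES = [
    ("/users*", "pytn-users-tg"),  # API traffic -> Flask users service
    ("/*", "pytn-client-tg"),      # everything else -> React client
]

def route(path):
    """Return the target group an ALB-style rule set would forward to."""
    for pattern, target_group in LISTENER_RULES:
        if fnmatch.fnmatch(path, pattern):
            return target_group
    return None
```

With these rules, a request for /users/ping lands on the users target group, while / falls through to the client.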
The Application Load Balancer is one of those AWS services that makes ECS so powerful. In fact, before its release, ECS was not a viable orchestration solution.
https://console.aws.amazon.com/ec2
Before you can start spinning up containers, you need to set up the EC2 Container Registry (ECR), a private image registry. Once it's set up, you can build, tag, and push images.
Set up the following image repositories at https://console.aws.amazon.com/ecs:
pytn-users
pytn-users_db
pytn-client
Why did we leave out Nginx?
When tagging your images, you should think about version control (using the SHA1 to tie the image back to a specific commit) as well as the environment (development, staging, or production) the image belongs to.
/$PROJECT/$ENVIRONMENT:$SHA1
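As a sketch, the convention above can be captured in one helper. This is an illustration of the naming scheme only; how the registry host and repository path combine in a real ECR setup may differ.

```python
def image_ref(registry, project, environment, sha1):
    """Build an image reference following the
    /$PROJECT/$ENVIRONMENT:$SHA1 convention above."""
    return f"{registry}/{project}/{environment}:{sha1}"

# e.g. image_ref("my-registry", "pytn-users", "production", "d4f3a2b")
```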
The Elastic Container Service (ECS) has four main components:
Task Definitions
=> Tasks
=> Services
=> Clusters
Task Definitions define which containers make up the overall application and how many resources are allocated to each container. You can think of them as blueprints.
Services instantiate the containers from the Task Definitions and run them on EC2 boxes within an ECS Cluster. These running instantiations are called Tasks.
An ECS Cluster is just a group of EC2 container instances managed by ECS.
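To make the "blueprint" idea concrete, here's a pared-down Task Definition expressed as the JSON you'd register with ECS. The family name, image URI, and resource numbers are invented for illustration; hostPort 0 is what enables the ALB's dynamic port-mapping.

```python
import json

# Minimal, hypothetical Task Definition (names and values are made up).
task_definition = {
    "family": "pytn-users-td",
    "containerDefinitions": [
        {
            "name": "users",
            "image": "AWS_ACCOUNT.dkr.ecr.us-east-1.amazonaws.com/pytn-users:prod",
            "memoryReservation": 300,  # soft memory limit, in MiB
            "essential": True,
            "portMappings": [
                # hostPort 0 asks ECS for a dynamic host port, so several
                # tasks can share one EC2 instance behind the ALB
                {"containerPort": 5000, "hostPort": 0},
            ],
        }
    ],
}

print(json.dumps(task_definition, indent=2))
```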
The health checks are the last line of defense after your unit, integration, and functional tests.
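As a rough sketch of what an ALB health check boils down to: it pings each task's endpoint and compares the HTTP status against a configurable "success codes" matcher (a string like "200" or "200-399" in the console). The parser below is an illustration of that matcher, not AWS code.

```python
def parse_success_codes(spec):
    """Parse an ALB-style success-codes string ("200", "200,301",
    "200-399") into a set of acceptable HTTP status codes."""
    codes = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            codes.update(range(int(lo), int(hi) + 1))
        else:
            codes.add(int(part))
    return codes

def is_healthy(status_code, spec="200"):
    """True if a health-check response would count as passing."""
    return status_code in parse_success_codes(spec)
```

A task that keeps failing this check is deregistered from the target group and replaced by the service scheduler.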
What's next?
✅