We'll use a Dockerfile to build a new image, then run a new container based on that image.
Here's a Docker configuration file that builds a MongoDB service. Save it in a new directory, and name it
Dockerfile, with a capital "D" and no file name extension.
# Dockerfile for building a MongoDB service

# Pull base image.
FROM mongo

# Define mountable directories.
VOLUME ["/data/db"]

# Define working directory.
WORKDIR /data

# Define default command.
CMD ["mongod"]

# Expose ports.
#   - 27017: process
#   - 28017: http
EXPOSE 27017
EXPOSE 28017
- The FROM line specifies that this image will be based on another image, named mongo, that has MongoDB already installed.
- The CMD line specifies a command that will be run when the container starts. In this case, it's the MongoDB service.
- And the EXPOSE lines expose network ports within the container to the host operating system, so that other apps can make network connections to the apps running inside the container.
With this Dockerfile saved to a directory, we can use the docker command from our terminal to build an image that our Mongo containers will be based on. We just run the docker command with the build subcommand, then pass the -t flag to tag the image. We'll use an image name of "mongotest". Finally, we'll have it run in the current directory, which contains our Dockerfile:
docker build -t mongotest .
Now, let's create a new container based on the image so we can try it out. We run the docker command again, this time with a subcommand of run.
In order to be able to communicate with MongoDB, we need to add a couple of things to the command line. Even though we exposed ports 27017 and 28017 in the Docker image, those ports won't be accessible unless we also publish them, that is, make them accessible via a port on the host OS. So let's publish the exposed port 27017 first. We pass the -p flag, which stands for "publish", followed by the host port we want to publish on, 27017, a colon, and the number of the exposed port on the container, which is also 27017. Then we'll do the same for port 28017: -p 28017:28017. Lastly, we provide the name of the image we want to base our container on:
docker run -p 27017:27017 -p 28017:28017 mongotest
- Software Delivery Pipelines -- When an app is set up so that it can easily be sent through the process of build, test, and deployment. Often referred to as CI/CD (Continuous Integration or Continuous Delivery).
- Dockerized App -- An app that has a Dockerfile made for it and can be built into a Docker image and run as a container.
- Container -- You can think of a container for an app as a real-life shipping container for freight. An app container is also like a VM, but far more lightweight and with the same security and operational isolation from system resources.
You'd expect any software that could serve the needs of Facebook and
Google would be hard to use, right?
Using Docker is actually easy.
Here's a Docker configuration file that builds a MongoDB service.
See the teacher's notes if you'd like to download a copy yourself.
This file should be named Dockerfile, with a capital "D" and
no extension, and saved in a directory on your host machine.
The Dockerfile is used to set up an image, a self-contained package that
includes an operating system and all the other dependencies your app needs to run.
Don't worry about all the details of this file right now, but
let me give you a quick overview of what it does.
The FROM line specifies that this image will be based on another image, named mongo,
that has MongoDB already installed.
The CMD line specifies a command that will be run when the container starts.
In this case, it's the MongoDB service.
And the EXPOSE lines expose network ports within the container to the host OS.
This lets other apps make network connections to the apps
running inside the container.
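For reference, here is the full Dockerfile this overview describes (the same file shown in the notes above):

```dockerfile
# Dockerfile for building a MongoDB service

# Pull base image.
FROM mongo

# Define mountable directories.
VOLUME ["/data/db"]

# Define working directory.
WORKDIR /data

# Define default command.
CMD ["mongod"]

# Expose ports.
#   - 27017: process
#   - 28017: http
EXPOSE 27017
EXPOSE 28017
```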
With this Dockerfile saved to a directory, we can use the docker command
from our terminal to build an image that our Mongo containers will be based on.
We just run the docker command with the build subcommand and
then pass the -t flag to tag the image.
We'll use an image name of "mongotest".
Finally, we'll have it run in the current directory, which contains our Dockerfile.
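Put together, the build command (run from the directory containing the Dockerfile) is:

```shell
docker build -t mongotest .
```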
When we run this command, Docker will go through all the instructions in the
Dockerfile and carry them out,
a process that may take a little while.
It downloads the mongo image to use as a base,
sets MongoDB up to run by default, and exposes our requested ports.
None of these changes are made to the host operating system by the way.
It's all happening inside the image.
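As an optional extra step (not part of the walkthrough above), you can confirm the image was built by listing the images on your machine and looking for the mongotest tag:

```shell
docker images
```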
Now let's create a new container based on the image so we can try it out.
We run the docker command again,
this time with a subcommand of run.
In order to be able to communicate with MongoDB,
we need to add a couple things to the command line.
Even though we exposed ports 27017 and 28017 in the Docker image,
those ports won't be accessible unless we also publish them.
That is, make them accessible via a port on the host OS.
So let's publish the exposed port 27017 first.
We need to pass the -p flag, which stands for publish, to docker run.
Then we need to type the port on the host that we want to publish the exposed
port on.
We could publish it on a different port number than we exposed, but
we'll just keep it the same and publish it at 27017.
Then we type a colon, and
the number of the exposed port on the container which is also 27017.
Then we'll do the same for port 28017,
publish 28017 as 28017.
Lastly, we provide the name of the image we want to base our container on, which is
mongotest.
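Put together, the full command is:

```shell
docker run -p 27017:27017 -p 28017:28017 mongotest
```

As noted above, the host port (before the colon) doesn't have to match the container port; for example, -p 8080:27017 would publish the same Mongo service on host port 8080 instead.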
The container will be created.
And as we specified in our Dockerfile,
the mongod command will be run within the container.
And because we published the two exposed ports, we can switch to another terminal
window and connect to those ports with the mongo client.
We can then interact with the service running on the container just as we would
with a MongoDB server running directly on our machine.
So let's run the show dbs command.
And it shows our available databases.
And let's run the exit command to exit out of the client.
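A sketch of that client session, assuming the mongo shell client is installed on the host (27017 is also the client's default port, so --port is shown only for clarity):

```shell
# Connect to the published port with the mongo client.
mongo --port 27017

# Then, at the mongo prompt:
#   > show dbs    # list the available databases
#   > exit        # leave the client
```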
But the Mongo service running in your container isn't just accessible from your
own machine.
It can be reached over the network by any existing service.
If you want, it could be deployed into any production environment.
Now imagine applying the same process to all the other software
your organization runs.
No longer would your team have to pass around brittle scripts and build files, or
match many different dependencies together.
With Docker you can isolate your various services and deploy them with ease.
And when you need to scale up the number of instances or distribute
them over the network differently, Docker will make that easy too.