TensorFlow GPU JupyterLab Docker Container — How to access files on a remote machine's Docker container using JupyterLab on localhost

Pallawi
7 min read · Jun 15, 2020


Hi there! If you are reading this blog, I know you have spent a good amount of time setting up a Docker container on a machine. When I wrote my first blog on Docker, my aim was to understand what Docker and containers are. I wanted to know how I could run a few commands and get things working with Docker, because before I wrote that blog I had been asked multiple times to "use docker". So I understood it just enough to get it running on my computer.

But today my problem was different. I have a machine that is used by more than 20 people, and we all access it remotely. Now you may ask why 20 people use the same machine. The reply is that there are 4 DGX GPUs attached to it. Yes, we all train our models there. For every project I work on, I create an Anaconda environment, and as the number of environments grows I end up forgetting their names. The result is a mess of environments.

Since we access the machine remotely, we do not have a screen to view our code, debug, or simply watch things run conveniently.

To solve that issue we are going to use a Docker container with JupyterLab. We will learn to access all the files in our remote folders using JupyterLab and never mess with our own or anyone else's environment.

Let's call your machine, the one with the screen that you are probably using to read this blog, the "localhost".

Let's call the machine without a screen, where you wish to run your Docker container and view all the files and folders using JupyterLab, the "remote machine".

Step 1: Enter the remote machine using SSH from the localhost terminal

ssh -L <host port>:localhost:<remote port> user@remote

We need to run this command in our localhost terminal to get inside the remote machine. You will be asked for the password of the remote machine. I am sure you know that. Enter the password and hit Enter.

ssh -L 9999:localhost:9999 remote_user_name@10.165.25.254

Now you are inside the remote machine. Let me tell you what you just did: ssh provides the -L option to specify port forwarding, and we must use it because we will run Jupyter on a port. The command maps port 9999 on the remote machine to port 9999 on the localhost. I am using the same number for both ports; you may choose different ones, but do read about which numbers you can use. We cannot choose just any port: ports below 1024 are privileged, so pick an unused port in the range 1024 to 65535.
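Before you forward a port, it is worth checking that nothing else is already using it on either machine. A minimal sketch, assuming the lsof utility is installed (port 9999 is just the example from above):

# check whether anything is already listening on port 9999
lsof -i :9999
# no output means the port is free and safe to use for forwarding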

Step 2: We need to pull a Docker image on the remote machine

docker pull [image name]

Let us pull a TensorFlow image that has support for GPUs, JupyterLab, and Jupyter Notebook. Make sure you have Docker installed before running the command below.

docker pull tensorflow/tensorflow:1.14.0-gpu-py3-jupyter

Sometimes you might already have the image; as I said, the machine is used by 20 people and images are often shared. So if the image is already present on the remote machine and you pull it again, do not worry, nothing bad happens. You will just get a message that the "Image is up to date".
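If you would rather check first instead of pulling again, you can ask Docker whether the image already exists locally. A small sketch, using the image tag from this blog:

# exits with status 0 and prints metadata only if the image is already present
docker image inspect tensorflow/tensorflow:1.14.0-gpu-py3-jupyter > /dev/null 2>&1 \
  && echo "image already present" || echo "image not found, pull it"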

Step 3: Run the Docker image to create a container on the remote machine

You can run the below command to check if the image was pulled correctly

docker images
I already had this image, so the created time shows 12 months ago.

Now let's run the container using the image you just pulled

docker run -p 9999:9999 --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=2 -it -v /path_of_remote_folder/:/path_of_folder_in_container/ --name segmentation_dock tensorflow/tensorflow:1.14.0-gpu-py3-jupyter bash

You can replace these paths:

path_of_remote_folder = /remote_machine_home/folder_1/
path_of_folder_in_container = /container_folder_1/
segmentation_dock = any name you wish, as it is the container name

Do not worry about container_folder_1; it will be created automatically once you run the above command.

We must always mount a volume, which you can think of as a space on the remote machine, to store the files that you need to use inside the container. We must do it while creating the container, because if at any point we wish to stop and remove the container, we should not lose the data.
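For example, here is the same command with the placeholders filled in. The path /home/alice/projects is hypothetical; replace it with your own remote folder:

# map the (hypothetical) remote folder /home/alice/projects into the container as /container_folder_1
# NVIDIA_VISIBLE_DEVICES=2 exposes only GPU number 2 to the container
docker run -p 9999:9999 --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=2 -it \
  -v /home/alice/projects/:/container_folder_1/ \
  --name segmentation_dock \
  tensorflow/tensorflow:1.14.0-gpu-py3-jupyter bash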

Run the commands given below to make sure the container is up and running:

1. Run the below command to check whether the container started properly

docker container ls -l
You will find something like this: a container ID, image name, created time, status, ports, and container name, respectively.

2. Enter the docker container

Use the below command to enter the container

docker exec -it segmentation_dock bash

Now you are inside the container. Please go to the folder you created in the container.
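Before installing anything, it is worth a quick sanity check that the container can actually see the GPU you requested. A minimal sketch; the second line assumes the TensorFlow 1.14 that ships with this image:

# should list the GPU(s) exposed through NVIDIA_VISIBLE_DEVICES
nvidia-smi
# TensorFlow 1.x way of checking GPU visibility
python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"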

Step 4: Install JupyterLab in the container

Install JupyterLab using the below command

pip install jupyter -U && pip install jupyterlab
I made a folder called codes in the container, which gives me access to all the files present in my remote mounted volume. You can see a folder called deeplab which has an environment. I activated it and then installed JupyterLab in that environment.
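To confirm the install worked, you can print the versions; a quick check, nothing more:

# both commands should print a version number if the install succeeded
jupyter --version
jupyter lab --version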

Step 5: Fire up JupyterLab inside the container

jupyter lab --ip=0.0.0.0 --port=9999 --allow-root
JupyterLab will start and print a URL containing a login token in the container terminal. Keep that token handy; you may be asked for it when you open the browser.

Step 6: Open a tab in Google Chrome and go to the address below.

localhost:9999
This is how your tab looks now. You can view the env folder and a file called file.py.

Now, this file.py is present in the remote machine path you mounted as a volume. Because the folder is mounted, anything you save there stays on the remote machine even if you stop and remove the container.
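You can convince yourself of this with a tiny experiment; the paths are the placeholder ones from Step 3:

# inside the container: create a file in the mounted folder
echo "hello from the container" > /container_folder_1/test.txt
# on the remote machine (outside the container): the same file is there
cat /remote_machine_home/folder_1/test.txt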

Step 7: Come out of the container, back to the remote machine

To come out of the container without stopping it and get back to the remote machine, press these keys on your keyboard:

Ctrl + P followed by Ctrl + Q
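Detaching this way leaves the container running, so you can verify that and hop back in whenever you like; both commands below were already introduced above:

# the container should still show up as running
docker container ls
# open a fresh shell inside it again
docker exec -it segmentation_dock bash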

Step 8: Commit your container and generate a new image with all the saved dependencies

docker container ls
sudo docker commit [CONTAINER_ID] [new_image_name]
sudo docker commit 48f453a6ce6a my_new_image_segmentation_dock
sudo docker images
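Committing gives you an image that already contains JupyterLab and whatever else you installed, so next time you can skip the install in Step 4. A sketch of reusing it; the container name seg_dock_v2 is made up for illustration:

# start a fresh container from the committed image; JupyterLab is already inside
docker run -p 9999:9999 --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=2 -it \
  -v /remote_machine_home/folder_1/:/container_folder_1/ \
  --name seg_dock_v2 my_new_image_segmentation_dock bash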

Step 9: Push the newly committed image to Docker Hub

docker login --username=yourhubusername
# enter the password when prompted
docker tag imageID yourhubusername/imagename:tag
1. yourhubusername is your user name on Docker Hub.

2. imageID is the image ID of the committed image.

3. imagename is the name you want your image to have on Docker Hub. It is always recommended to have a descriptive name.

4. tag can be any name you choose; latest is common.

docker tag 12345645 pallawids/seg_dock:latest
docker push pallawids/seg_dock
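Once the push finishes, anyone on the team can pull the image on any machine, assuming they have access to the repository; the repository name here is the one from the example above:

# pull the shared image on another machine
docker pull pallawids/seg_dock:latest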

Note: If you are running a container on your local machine, use the below commands to view JupyterLab.

pip install jupyter -U && pip install jupyterlab
jupyter notebook --ip 0.0.0.0 --port 9999 --no-browser --allow-root

Open in browser:

http://localhost:9999

If you ran a Jupyter notebook last night and left a process running all night, or simply forgot to close it, the next time you try opening a Jupyter notebook from your container you might not be allowed, because the port is still occupied.

Please find the unclosed Jupyter notebook and clear the port.

jupyter notebook list
jupyter notebook stop 9999
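If the stop command does not free the port, you can fall back to finding and killing the process that holds it. A sketch, assuming lsof is available; <PID> stands for whatever process ID lsof reports:

# find the process listening on port 9999
lsof -i :9999
# stop it by its process ID
kill <PID>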

Conclusion:

I struggled with this whole process for a week to get the correct set of commands. I sought help and got it in bits and pieces, but to get what I needed I read a lot of blogs and multiple Stack Overflow pages, because I also needed the GPU and the mounting of a remote machine folder. Now I am sorted, and I hope this helped you too. I will never forget this, as I have documented it. These commands and steps should come in handy, because we do not want to spend a lot of time setting things up.

Do give a clap if you find this helpful so that others may find it too.
