Docker

Using Docker
A short summary of useful docker commands is given in the docker cheat sheet.

View existing docker containers
To view all existing docker containers, run the following command: docker ps -a
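A couple of useful variants (the container name below is just an example):

```shell
# All containers, running or stopped:
docker ps -a

# Only currently running containers:
docker ps

# Narrow the listing down to one container by name (example name):
docker ps -a --filter name=my_docker
```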

Open a shell of a docker
To open a new shell in an existing docker container, run this command on the server: docker exec -it DOCKER_NAME bash
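Besides an interactive shell, exec can also run a single command and return; a short sketch (container name is an example):

```shell
# Interactive shell inside the container:
docker exec -it my_docker bash

# Run one command without opening a shell, e.g. check GPU visibility:
docker exec my_docker nvidia-smi
```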

Docker initialization
Pull the desired docker image (you may use DockerHub). The image contains all the relevant installations for our purposes; DockerHub has premade images for common uses (PyTorch, TensorFlow, etc.):

docker pull tensorflow/tensorflow:latest-gpu

Or for pytorch:

docker pull pytorch/pytorch:latest

To open a new docker container, use the following command:

For Permuter1: docker run -it --gpus all --restart unless-stopped --mount type=bind,source=/storage/,target=/common_space_docker/ -p 7777:22 -p 5000:5000 -p 5001:5001 -p 5002:5002 --name DOCKER_NAME tensorflow/tensorflow:latest-gpu bash

For Permuter2: sudo nvidia-docker run -it --gpus all --mount type=bind,source=/storage/,target=/common_space_docker/ -p 7777:22 --name DOCKER_NAME pytorch/pytorch:latest bash

For Permuter3: Tensorflow: sudo docker run --gpus all -it --restart unless-stopped --shm-size=1024m --mount type=bind,source=/mnt/,target=/common_space_docker/ -p 8888:22  --name DOCKER_NAME tensorflow/tensorflow:latest-gpu  bash

Pytorch: sudo docker run -it --restart unless-stopped --mount type=bind,source=/mnt/,target=/common_space_docker/ -p 7777:22 --name pytorch_docker --gpus all pytorch/pytorch:latest bash

If you want, you can limit which CPU cores the container may use (core by core) with --cpuset-cpus.

If you want, you can also enlarge the shared memory (needed when using data-loader workers) by passing the flag "--shm-size=1024m".

For Permuter4: docker run -it --gpus all --restart unless-stopped --mount type=bind,source=/storage/,target=/common_space_docker/ -p 7777:22 -p 5000:5000 -p 5001:5001 -p 5002:5002 --name DOCKER_NAME tensorflow/tensorflow:latest-gpu  bash

Options:
 * --mount type=bind maps a directory on the server to a directory in the docker. Hence, in "source=XXXX" put a new directory for your docker, e.g. "/common_space/new_docker_env". The "target" attribute of mount should not be changed.
 * -p maps ports between the server and the docker. For instance, "-p 7777:22" maps port 7777 of the server to port 22 of the docker.
 * --name sets the name of the new docker.

Arguments:
 * docker image: the path to the docker image, e.g. "tensorflow/tensorflow:latest-gpu".
 * command: the command to run in the newly opened docker, e.g. "bash".
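Putting the options and arguments together, a minimal sketch of a run command (the paths, ports, and container name below are examples; adjust them to your server):

```shell
# --gpus all       expose all GPUs to the container
# --restart ...    bring the container back up automatically
# --shm-size       enlarge shared memory for data-loader workers
# -p 7777:22       map server port 7777 to the container's ssh port 22
docker run -it --gpus all --restart unless-stopped --shm-size=1024m \
  --mount type=bind,source=/storage/my_docker_env/,target=/common_space_docker/ \
  -p 7777:22 --name my_docker \
  tensorflow/tensorflow:latest-gpu bash
```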

Docker restart
To reopen a closed docker, use the following command: docker restart DOCKER_NAME. Note that once the docker is restarted, the ssh service must be restarted as well, using service ssh restart inside the container.
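The two steps above can be sketched as follows (container name is an example):

```shell
# Bring the stopped container back up:
docker restart my_docker

# Restart the ssh daemon inside it so remote logins work again:
docker exec my_docker service ssh restart
```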

ssh-server initialization
Initiate a bash shell in the docker: docker exec -it DOCKER_NAME bash

To enable ssh, run the following commands inside the docker container:

apt update && apt install -y openssh-server
mkdir /var/run/sshd
echo 'root:' | chpasswd

The last command changes the root password to whatever follows "root:" (left blank in this document).

The next lines are used to: 1. permit root login; 2. switch the SFTP subsystem to internal-sftp so that SFTP authentication works.

sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
sed -i 's/Subsystem.*/Subsystem sftp internal-sftp/g' /etc/ssh/sshd_config
service ssh restart
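The two sed substitutions can be sanity-checked on a scratch copy before touching the real config; a minimal sketch (the scratch path and the sample Subsystem line are assumptions, mimicking a typical sshd_config):

```shell
# Scratch file with the two relevant sshd_config lines (sample content)
cat > /tmp/sshd_config.test <<'EOF'
#PermitRootLogin prohibit-password
Subsystem sftp /usr/lib/openssh/sftp-server
EOF

# Same substitutions as above, applied to the scratch file
sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /tmp/sshd_config.test
sed -i 's/Subsystem.*/Subsystem sftp internal-sftp/g' /tmp/sshd_config.test

cat /tmp/sshd_config.test
```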

Some containers print a long startup message, which causes problems with the PyCharm debugger. To remove it, just modify /etc/bash.bashrc.

package installation
apt-get update
apt-get install git-core
apt install r-base
apt install vim less
pip install --upgrade pip
pip install --upgrade matplotlib scipy tqdm bunch commentjson pandas jupyter
pip install --upgrade tensorflow tflearn scikit-learn tensorflow_datasets tensorflow_probability
pip install --upgrade tensorflow torch torchvision

running a jupyter-notebook in the docker
Use the following command inside the docker to start a jupyter server that will be accessible on the chosen port (GPU_IDS, PORT, and SERVER_IP are placeholders): CUDA_VISIBLE_DEVICES=GPU_IDS jupyter notebook --port PORT --no-browser --allow-root --ip 0.0.0.0 Then the web interface is accessible via SERVER_IP:PORT
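For example, with concrete placeholder values (GPU id, port, and server name below are assumptions; substitute your own):

```shell
# Inside the docker: serve notebooks on port 8889 using only GPU 0
CUDA_VISIBLE_DEVICES=0 jupyter notebook --port 8889 --no-browser --allow-root --ip 0.0.0.0

# From your local machine, if the port is not mapped in the docker run command,
# forward it over ssh (user/server are placeholders):
ssh -N -L 8889:localhost:8889 user@server
# then browse to http://localhost:8889
```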