1. Installing the NVIDIA Graphics Driver
Install the latest driver compatible with your graphics card, version 367.4x or later.
$ sudo apt-get update
$ sudo apt-get install nvidia-375
Reboot after the installation finishes.
Then check that the driver version and the graphics card information are displayed:
$ nvidia-smi
Wed Dec 20 15:40:51 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.98                 Driver Version: 384.98                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 960     Off  | 00000000:01:00.0  On |                  N/A |
| 22%   44C    P5    13W / 160W |    251MiB /  1993MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      1039    G   /usr/lib/xorg/Xorg                             169MiB |
|    0      1778    G   compiz                                          76MiB |
|    0      2043    G   /usr/lib/firefox/firefox                         1MiB |
+-----------------------------------------------------------------------------+
2. Installing CUDA Toolkit v8.0
The latest version on the NVIDIA site is currently v9.1, so download version 8.0 from developer.nvidia.com/cuda-80-ga2-download-archive. Do not install the OpenGL libraries during installation, or you can end up in an infinite boot loop. Switch to the console with Ctrl+Alt+F1 and run the commands below (once lightdm is stopped, the GUI is no longer available). It is convenient to move the .run file into /home/<username>/ before switching to the console.
$ chmod 777 cuda_8.0.61_375.26_linux.run
$ sudo service lightdm stop
Run the installer (e.g. $ sudo ./cuda_8.0.61_375.26_linux.run). Since the graphics driver is already installed, answer no when the installer asks whether to install the NVIDIA driver. After the installation completes, set the environment variables.
Add the lines below at the very end of ~/.bashrc; the existing contents of ${PATH} are kept.
export PATH=/usr/local/cuda/bin:${PATH}
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:${LD_LIBRARY_PATH}
Apply the updated environment variables and check that CUDA is installed correctly.
$ source ~/.bashrc
$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2016 NVIDIA Corporation
Built on Tue_Jan_10_13:22:03_CST_2017
Cuda compilation tools, release 8.0, V8.0.61
The which command shows which folder the CUDA compiler was installed into:
$ which nvcc
/usr/local/cuda/bin/nvcc
3. Installing cuDNN v6.0
Download cuDNN v6.0 for CUDA 8.0 from the NVIDIA site; the file to get is cuDNN v6.0 Library for Linux.
$ tar xzvf cudnn-8.0-linux-x64-v6.0.tgz
$ sudo cp cuda/lib64/* /usr/local/cuda/lib64/
$ sudo cp cuda/include/* /usr/local/cuda/include/
$ sudo chmod a+r /usr/local/cuda/lib64/libcudnn*
$ sudo chmod a+r /usr/local/cuda/include/cudnn.h
If the following command prints output like the one below, the installation went well.
$ cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2
#define CUDNN_MAJOR 6
#define CUDNN_MINOR 0
#define CUDNN_PATCHLEVEL 21
--
#define CUDNN_VERSION (CUDNN_MAJOR * 1000 + CUDNN_MINOR * 100 + CUDNN_PATCHLEVEL)
#include "driver_types.h"
+ Install the NVIDIA CUDA Profiler Tools Interface
$ sudo apt-get install libcupti-dev
+ Install Bazel
$ sudo add-apt-repository ppa:webupd8team/java
$ sudo apt-get update
$ sudo apt-get install oracle-java8-installer
$ echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list
$ sudo apt install curl
$ curl https://bazel.build/bazel-release.pub.gpg | sudo apt-key add -
$ sudo apt-get update && sudo apt-get install bazel
$ bazel version
Build label: 0.9.0
Build target: bazel-out/k8-fastbuild/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar
Build time: Tue Dec 19 09:31:58 2017 (1513675918)
Build timestamp: 1513675918
Build timestamp as int: 1513675918
4. Installing Anaconda Python 3.6
Download the 64-bit Anaconda installer for Python 3.6 from the Anaconda site. During installation, answer yes to the final prompt asking whether to add Anaconda to the environment variables. Afterwards, apply the updated environment variables.
$ bash Anaconda3-5.0.1-Linux-x86_64.sh
$ source ~/.bashrc
Running Python shows the Anaconda banner, as below.
$ python
Python 3.6.3 |Anaconda, Inc.| (default, Oct 13 2017, 12:02:49)
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
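As a quick sanity check that is not part of the original post, the snippet below prints which interpreter is actually on PATH; with a default Anaconda install it should point somewhere under ~/anaconda3/ (that path is an assumption and may differ on your machine).

# Assumption: a default Anaconda 3 install; the exact path may differ on your machine.
import sys
print(sys.executable)   # e.g. /home/<username>/anaconda3/bin/python
print(sys.version)      # should mention Anaconda and Python 3.6.x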
5. Installing TensorFlow
TensorFlow is already installed with Anaconda, so only the GPU version is reinstalled.
// When it was installed with pip install, the warning "RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6 return f(*args, **kwds)" appeared, so it was uninstalled and reinstalled.
$ conda search tensorflow
$ conda install tensorflow-gpu
$ python
Python 3.6.3 |Anaconda custom (64-bit)| (default, Oct 13 2017, 12:02:49)
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> tf.Session()
2017-12-20 17:34:58.514172: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-12-20 17:34:58.514223: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-12-20 17:34:58.514241: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-12-20 17:34:58.514256: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-12-20 17:34:58.514283: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-12-20 17:34:58.763001: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:893] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2017-12-20 17:34:58.763244: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 0 with properties:
name: GeForce GTX 960
major: 5 minor: 2 memoryClockRate (GHz) 1.304
pciBusID 0000:01:00.0
Total memory: 1.95GiB
Free memory: 1.65GiB
2017-12-20 17:34:58.763256: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0
2017-12-20 17:34:58.763260: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0: Y
2017-12-20 17:34:58.763266: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 960, pci bus id: 0000:01:00.0)
<tensorflow.python.client.session.Session object at 0x7f63ba279828>
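Beyond the session log above, a short script can double-check that TensorFlow actually uses the GPU. The sketch below is not from the original post; it assumes the same TensorFlow 1.x GPU build installed above and uses only standard TF 1.x APIs.

# Minimal GPU sanity check for the TensorFlow 1.x GPU build installed above.
import tensorflow as tf
from tensorflow.python.client import device_lib

# List every device TensorFlow detects; a GPU device should appear alongside the CPU.
print([d.name for d in device_lib.list_local_devices()])

# Pin a small matrix multiplication to the GPU and run it in a session.
with tf.device('/gpu:0'):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0, 0.0], [0.0, 1.0]])   # identity matrix
    c = tf.matmul(a, b)

with tf.Session() as sess:
    print(sess.run(c))   # expected: [[1. 2.] [3. 4.]]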
6. Installing PyTorch
$ conda install pytorch torchvision -c pytorch
$ python
Python 3.6.3 |Anaconda, Inc.| (default, Oct 13 2017, 12:02:49)
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> A = torch.rand(3, 4)
>>> print(A)
0.7512 0.0381 0.1706 0.3747
0.5672 0.5980 0.3960 0.6589
0.6865 0.3870 0.0666 0.7997
[torch.FloatTensor of size 3x4]
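As an extra check that is not in the original post, the sketch below assumes the CUDA-enabled PyTorch build installed above and confirms the GPU is visible before running a small computation on it.

# Minimal CUDA sanity check for the PyTorch build installed above.
import torch

print(torch.cuda.is_available())      # True if the GPU build and driver are working
print(torch.cuda.get_device_name(0))  # e.g. 'GeForce GTX 960'

# Move random tensors to the GPU and multiply them there.
A = torch.rand(3, 4).cuda()
B = torch.rand(4, 2).cuda()
print(torch.mm(A, B))                 # the result tensor lives on the GPU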
Written on 2017-12-20.
Sites referenced during installation:
1 ejklike.github.io/2017/03/06/install-tensorflow1.0-on-ubuntu16.04-1.html
2 newsight.tistory.com/92
3 tensorflow.blog/tag/pytorch/