Compare commits

...

17 Commits

Author SHA1 Message Date
zmy  714b9e9e4d  add default config, fix flash issue  2021-07-16 22:31:12 +08:00
zmy  64cc2c57cf  add watch support  2021-07-16 22:19:42 +08:00
zmy  9cdd860903  🐛 fix an bug when parsing from file  2020-12-24 11:08:57 +08:00
zmy  9d2faec7a5  🐛 fix bug on empty configuration file  2020-12-22 20:24:26 +08:00
zmy  742a717b49  🐛 fix bug on empty configuration file  2020-12-22 20:17:24 +08:00
zmy  9ec4d877eb  🐛 fix a bug  2020-12-22 20:04:55 +08:00
zmy  1a5e1c39c7  update readme  2020-12-22 19:51:23 +08:00
zmy  1d2ddb82be  update readme  2020-12-22 19:49:23 +08:00
zmy  f389c9c3f0  a little improvments and update readme  2020-12-22 19:47:34 +08:00
zmy  1e80821bf8  add support for docker by a new configuration item redirect  2020-12-22 19:21:11 +08:00
zmy  2685e20782  bug fix  2020-12-17 20:02:34 +08:00
zmy  e06555d14f  fix a bug  2020-12-17 19:55:29 +08:00
zmy  27604247af  update readme  2020-12-14 18:39:52 +08:00
zmy  1f9984893b  improve style  2020-12-14 18:34:51 +08:00
zmy  37a1cfe942  change version to 0.0.3  2020-12-14 18:20:56 +08:00
zmy  8ae743e166  fix a bug which do not show summed vmen in process section and some slight modifications  2020-12-14 18:19:49 +08:00
zmy  5f57991308  add support for fine-grained style control  2020-12-14 16:54:49 +08:00
6 changed files with 417 additions and 92 deletions

.gitignore (vendored): 2 changed lines

@@ -3,6 +3,8 @@
 __pycache__/
 *.py[cod]
 *$py.class
+try.py
+.vscode/
 
 # C extensions
 *.so


@@ -6,7 +6,7 @@ A naive tool for observing gpu status and auto set visible gpu in python code.
 1. install the package.
 ```shell
-pip install https://git.zmy.pub/zmyme/gpuutil/archive/v0.0.2.tar.gz
+pip install https://git.zmy.pub/zmyme/gpuutil/archive/v0.0.5.tar.gz
 ```
 2. for observing gpu status, just input
@@ -15,29 +15,45 @@ python -m gpuutil <options>
 ```
 when directly running ```python -m gpuutil```, you would probably get:
 ```text
-+---+------+------+---------+---------+------+---------------+
-|ID | Fan  | Temp | Pwr     | Freq    | Util | Vmem          |
-+---+------+------+---------+---------+------+---------------+
-| 0 | 22 % | 33 C | 4.47 W  | 300 MHz | 0 %  | 1569/11019 MiB|
-| 1 | 22 % | 35 C | 3.87 W  | 300 MHz | 0 %  |    3/11019 MiB|
-| 2 | 22 % | 36 C | 8.22 W  | 300 MHz | 0 %  |    3/11019 MiB|
-| 3 | 22 % | 36 C | 21.82 W | 300 MHz | 0 %  |    3/11019 MiB|
-+---+------+------+---------+---------+------+---------------+
-[34860|0] user1(783 MiB) python train.py --some -args
-[38694|0] user2(783 MiB) python train.py --some --other -args
++----+------+------+----------+----------+------+----------------+
+| ID | Fan  | Temp | Pwr      | Freq     | Util | Vmem           |
++----+------+------+----------+----------+------+----------------+
+| 0  | 22 % | 21 C | 9.11 W   | 300 MHz  | 0 %  | 3089/11019 MiB |
+| 1  | 22 % | 23 C | 6.28 W   | 300 MHz  | 0 %  | 786/11019 MiB  |
+| 2  | 38 % | 59 C | 92.04 W  | 1890 MHz | 6 %  | 3608/11019 MiB |
+| 3  | 40 % | 67 C | 246.38 W | 1740 MHz | 93 % | 3598/11019 MiB |
++----+------+------+----------+----------+------+----------------+
+|                          Process Info                          |
++----------------------------------------------------------------+
+| [26107|0] user1(737 MiB) python                                 |
+| [34033|0,1] user2(1566 MiB) python                              |
+| [37190|0] user2(783 MiB) python                                 |
+| [37260|0] user2(783 MiB) python                                 |
+| [30356|2] user3(3605 MiB) python train.py --args --some really |
+| long arguments                                                  |
+| [34922|3] user3(3595 MiB) python train.py --args --some really |
+| long arguments version 2                                        |
++----------------------------------------------------------------+
 ```
 To get more information, run ```python -m gpuutil -h```, you would get:
 ```text
-python __main__.py -h
-usage: __main__.py [-h] [--profile PROFILE] [--cols COLS] [--show-process SHOW_PROCESS] [--save]
+usage: __main__.py [-h] [--profile PROFILE] [--cols COLS] [--style STYLE]
+                   [--show-process SHOW_PROCESS] [--vertical VERTICAL] [--save]
 optional arguments:
   -h, --help            show this help message and exit
   --profile PROFILE, -p PROFILE
                         profile keyword, corresponding configuration are saved in ~/.gpuutil.conf
-  --cols COLS, -c COLS  colums to show
+  --cols COLS, -c COLS  colums to show.(Availabel cols: ['ID', 'Fan', 'Temp', 'TempMax', 'Pwr',
+                        'PwrMax', 'Freq', 'FreqMax', 'Util', 'Vmem', 'UsedMem', 'TotalMem', 'FreeMem',
+                        'Users']
+  --style STYLE, -sty STYLE
+                        column style, format: |c|l:15|r|c:14rl:13|, c,l,r are align methods, | is line
+                        and :(int) are width limit.
   --show-process SHOW_PROCESS, -sp SHOW_PROCESS
                         whether show process or not
+  --vertical VERTICAL, -v VERTICAL
+                        whether show each user in different lines. (show user vertically)
   --save                save config to profile
 ```
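For reference, each bracketed line in the process box above follows the format string used by GPUStat.show() further down in this diff; the snippet below only restates that format with made-up values:

```python
# Restating the process-line format from gpuutil (the values here are invented).
proc_fmt = '[{pid}|{gpus}] {user}({vmem} MiB) {cmd}'
print(proc_fmt.format(pid='34033'.rjust(5), gpus='0,1', user='user2', vmem=1566, cmd='python'))
# -> [34033|0,1] user2(1566 MiB) python
```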
@@ -58,6 +74,46 @@ def auto_set(num, allow_nonfree=True, ask=True, blacklist=[], show=True):
 # some code here.
 ```
+## Use this inside an docker.
+For some reason, codes that running in docker cannot get the correct information about the process that using the gpu.
+To support that, gpuutil supports read the output command of nvidia-smi and ps from an given file, which should be generated by you from host machine
+To use this in docker, try the following steps:
+1. figure out a way to pass the output of command ```nvidia-smi -q -x``` to the docker that your are currently using, save the output as a text file.
+2. pass the output of a ps-like command to the docker. It is a table-like output, the first line is header, which should at least contains user, pid and command. below is an valid output generated by running ```ps -axo user,pid,command```on host machine:
+```
+USER         PID COMMAND
+root           1 /bin/bash -c bash /etc/init.docker; /usr/sbin/sshd -D
+root           8 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
+root           9 sshd: user1 [priv]
+user1         19 sshd: user1@pts/0
+user1         20 -zsh
+user1         97 tmux
+user1         98 -zsh
+```
+if your generated output have different name, for example when you are using ```docker top``` instead of ```ps```, the ```COMMAND``` section would be ```CMD```, therefore you need prepare a dict that maps its name to either of ```user, pid, command```, note that its insensitive to upper case.
+3. run the configuration script.
+```shell
+python -m gpuutil.set_redirect -nv path/to/your/nvidia/output -ps /path/to/your/ps/output -pst cmd=command,username=user
+```
+for more information about the script, run ```python -m gpuutil.set_redirect -h```, you will get:
+```
+usage: set_redirect.py [-h] [--nvsmi NVSMI] [--ps PS] [--ps_name_trans PS_NAME_TRANS]
+optional arguments:
+  -h, --help            show this help message and exit
+  --nvsmi NVSMI, -nv NVSMI
+                        a file indicates real nvidia-smi -q -x output.
+  --ps PS, -ps PS       a file indicates real ps-like output.
+  --ps_name_trans PS_NAME_TRANS, -pst PS_NAME_TRANS
+                        a dict of name trans, format: name1=buildin,name2=buildin, buildin can be choosen from cmd,user,pid
+```
+> some advice:
+> 1. you can use a script that run nvidia-smi and ps command and save their output to a directory, the mount the directory to the docker as readonly.
+> 2. you could consider mount the directory as tmpfs.
 ## ps:
-1. you can get more detailed gpu info via accessing gpuutil.GPUStat class, for more information, just look the code.
-2. Since it use ps command to get detailed process info, it can only be used on linux.
+1. You can get more detailed gpu info via accessing gpuutil.GPUStat class, for more information, just look the code.
+2. Since it use ps command to get detailed process info, it can only be used on linux, if you use it on windows, some information might be missing.
+3. If you have any trouble, feel free to open an issue.
+4. The code is straight forward, it's also a good choice to take an look at the code if you got any trouble.
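The closing note above points at the GPUStat class for programmatic access. A minimal sketch of that route, using only names that appear elsewhere in this diff and assuming a Linux host with nvidia-smi available:

```python
# Minimal sketch, not an official example: query GPU state from Python via GPUStat.
from gpuutil import GPUStat

stat = GPUStat()
# With tostdout=False, show() returns the rendered table instead of printing it
# (per the new signature introduced later in this diff).
text = stat.show(enabled_cols=['ID', 'Util', 'Vmem'], show_command=True, tostdout=False)
print(text)
# After parsing, stat.gpus holds one dict per GPU with the detailed fields.
print(len(stat.gpus), 'gpus attached')
```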


@@ -1,6 +1,6 @@
-from gpuutil import GPUStat
+import curses
+from gpuutil import GPUStat, loaddict, savedict
 import sys
-import json
 import argparse
 import os
@@ -18,13 +18,39 @@ def load_config():
     configpath = os.path.join(home_dir, '.gpuutil.conf')
     if not os.path.isfile(configpath):
         return {}
-    with open(configpath, 'r', encoding='utf-8') as f:
-        return json.load(f)
+    return loaddict(configpath)
 
 def save_config(config):
     home_dir = os.path.expanduser('~')
     configdir = os.path.join(home_dir, '.gpuutil.conf')
-    with open(configdir, 'w+', encoding='utf-8') as f:
-        json.dump(config, f, ensure_ascii=False, indent=4)
+    savedict(configdir, config)
+
+# style format: |c|l:15|r|c:14rl:13|
+def parse_style(style):
+    if style is None:
+        return None, None
+    components = []
+    limits = []
+    while len(style) > 0:
+        ch = style[0]
+        if ch == '|':
+            components.append(ch)
+            style = style[1:]
+            continue
+        elif ch in ['l', 'r', 'c']:
+            limit = None
+            style = style[1:]
+            if style[0] == ':':
+                style = style[1:]
+                digits = ''
+                while style[0].isdigit():
+                    digits += style[0]
+                    style = style[1:]
+                if digits != '':
+                    limit = int(digits)
+            components.append(ch)
+            limits.append(limit)
+    style = ''.join(components)
+    return style, limits
 
 if __name__ == '__main__':
     stat = GPUStat()
@@ -33,13 +59,18 @@ if __name__ == '__main__':
     recommended_cols = ['ID', 'Fan', 'Temp', 'Pwr', 'Freq', 'Util', 'Vmem']
     parser = argparse.ArgumentParser()
-    parser.add_argument('--profile', '-p', default=None, type=str, help='profile keyword, corresponding configuration are saved in ~/.gpuutil.conf')
-    parser.add_argument('--cols', '-c', type=csv2list, help='colums to show')
+    parser.add_argument('--profile', '-p', default='default', type=str, help='profile keyword, corresponding configuration are saved in ~/.gpuutil.conf')
+    parser.add_argument('--cols', '-c', type=csv2list, help='colums to show.(Availabel cols: {0}'.format(avaliable_cols))
+    parser.add_argument('--style', '-sty', type=str, default=None, help='column style, format: |c|l:15|r|c:14rl:13|, c,l,r are align methods, | is line and :(int) are width limit.')
     parser.add_argument('--show-process', '-sp', default=True, type=str2bool, help='whether show process or not')
+    parser.add_argument('--vertical', '-v', default=False, type=str2bool, help='whether show each user in different lines. (show user vertically)')
     parser.add_argument('--save', default=False, action="store_true", help='save config to profile')
+    parser.add_argument('--watch', '-w', default=-1, type=float, help='save config to profile')
     args = parser.parse_args()
     cols = args.cols if args.cols is not None else recommended_cols
     show_process = args.show_process
+    style, limit = parse_style(args.style)
+    vertical = args.vertical
     unexpected_cols = []
     for col in cols:
         if col not in avaliable_cols:
@@ -50,7 +81,10 @@ if __name__ == '__main__':
     if args.save:
         params = {
             "cols": cols,
-            "show-process": show_process
+            "style": style,
+            "limit": limit,
+            "show-process": show_process,
+            "vertical": vertical
         }
         profile = args.profile if args.profile is not None else input('Please input your profile name:\n>>> ')
         config = load_config()
@@ -58,10 +92,56 @@ if __name__ == '__main__':
         save_config(config)
     elif args.profile is not None:
         config = load_config()
+        if 'default' not in config:
+            config['default'] = {
+                "cols": cols,
+                "style": style,
+                "limit": limit,
+                "show-process": show_process,
+                "vertical": vertical
+            }
         if args.profile in config:
             params = config[args.profile]
             cols = params["cols"]
             show_process = params["show-process"]
+            style = None
+            limit = None
+            vertical = False
+            if "style" in params:
+                style = params["style"]
+            if "limit" in params:
+                limit = params["limit"]
+            if "vertical" in params:
+                vertical = params["vertical"]
         else:
             raise ValueError('Profile do not exist.\nAvaliable Profiles:{0}'.format(','.join(list(config.keys()))))
-    stat.show(enabled_cols = cols, show_command=show_process)
+    info = stat.show(enabled_cols = cols, colsty=style, colsz=limit, vertical=vertical, show_command=show_process, tostdout=False)
+    if args.watch < 0:
+        print(info)
+    else:
+        from curses import wrapper
+        import time
+        def continuous_watch(stdscr, info):
+            curses.curs_set(0)
+            stdscr.clear()
+            stdscr.nodelay(True)
+            lasttime = time.time()
+            try:
+                while True:
+                    c = stdscr.getch()
+                    if c in [ord('q'), ord('Q')]:
+                        break
+                    curses.flushinp()
+                    hint = "Interval: {0} S | CurrentTime: {1}".format(args.watch, time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()))
+                    stdscr.erase()
+                    stdscr.addstr(0, 0, hint + '\n' + info)
+                    stdscr.refresh()
+                    passed_time = time.time() - lasttime
+                    if passed_time < args.watch:
+                        time.sleep(args.watch - passed_time)
+                    lasttime = time.time()
+                    info = stat.show(enabled_cols = cols, colsty=style, colsz=limit, vertical=vertical, show_command=show_process, tostdout=False)
+            except KeyboardInterrupt:
+                curses.flushinp()
+                pass
+        wrapper(continuous_watch, info)
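As a quick sanity check of the parse_style helper added above, here is a hand trace of the loop on a simple style string (my reading of the code, with parse_style assumed to be in scope):

```python
# Hand trace of parse_style from the hunk above: alignment characters are kept,
# ':<int>' width limits go into a parallel list, and bare columns get None.
style, limits = parse_style('|c|l:15|r|')
# expected: style  == '|c|l|r|'
#           limits == [None, 15, None]
```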


@@ -10,6 +10,31 @@ import platform
 osname = platform.system()
 
+def loadfile(path):
+    with open(path, 'r', encoding='utf-8') as f:
+        return f.read()
+def savefile(path, content):
+    with open(path, 'w+', encoding='utf-8') as f:
+        return f.write(content)
+def loaddict(path):
+    content = loadfile(path)
+    content = content.strip()
+    if len(content) != 0:
+        return json.loads(content)
+    else:
+        return {}
+def savedict(path, dictionary):
+    content = json.dumps(dictionary, indent=4, ensure_ascii=False)
+    savefile(path, content)
+def clean_split(line, delemeter=' '):
+    words = line.split(delemeter)
+    words = [w.strip() for w in words]
+    words = [w for w in words if w != '']
+    return words
+def exe_cmd(command):
+    pipe = os.popen(command)
+    return pipe.read()
+
 def xml2dict(node):
     node_dict = {}
@@ -25,10 +50,8 @@ def xml2dict(node):
             node_dict[child.tag].append(xml2dict(child))
     return node_dict
 
-def parse_nvsmi_info(command='nvidia-smi -q -x'):
-    pipe = os.popen(command)
-    xml = pipe.read()
-    tree = ET.fromstring(xml)
+def parse_nvsmi_info(nvsmixml):
+    tree = ET.fromstring(nvsmixml)
     return xml2dict(tree)
 
 def parse_gpu_info(stat):
@@ -140,7 +163,7 @@ def get_basic_process_info_linux():
     lines = output.split('\n')[1:]
     processes = {}
     for line in lines:
-        words = [p for p in line.split(' ') if p != '']
+        words = clean_split(line)
         if len(words) < 3:
             continue
         username = words[0]
@@ -168,50 +191,125 @@ def get_basic_process_info_windows():
         }
     return processes
 
-def draw_table(table, header_line = 0, c_align = 'r', h_align='c', delemeter = ' | ', joint_delemeter = '-+-'):
-    # calculate max lengths.
-    num_columns = len(table[0])
-    def cvt_align(align, num_columns):
-        if type(align) is str:
-            if len(align) == 1:
-                return [align] * num_columns
-            elif len(align) == num_columns:
-                return list(align)
-            else:
-                raise ValueError('align flag length mismatch')
-        else:
-            return align
-    c_align = cvt_align(c_align, num_columns)
-    h_align = cvt_align(h_align, num_columns)
-    max_lengths = [0] * num_columns
-    for row in table:
-        for i, col in enumerate(row):
-            if len(col) > max_lengths[i]:
-                max_lengths[i] = len(col)
-    width = sum(max_lengths) + num_columns * len(delemeter) + 1
-    hline = '+'
-    hline += joint_delemeter.join(['-' * length for length in max_lengths])
-    hline += '+\n'
-    info = hline
-    for i, row in enumerate(table):
-        info += '|'
-        row_just = []
-        align = h_align if i <= header_line else c_align
-        for w, col, a in zip(max_lengths, row, align):
-            if a == 'c':
-                row_just.append(col.center(w))
-            elif a == 'l':
-                row_just.append(col.ljust(w))
-            elif a == 'r':
-                row_just.append(col.rjust(w))
-        info += delemeter.join(row_just)
-        info += '|\n'
-        if i == header_line:
-            info += hline
-    info += hline
-    return info
+def get_basic_process_info_by_file(filepath, col_name_trans=None):
+    # suppose cmd is always at the last, and the previous lines have no space.
+    content = loadfile(filepath)
+    lines = content.split('\n')
+    header = clean_split(lines[0])
+    interested = {
+        'user': None,
+        'pid': None,
+        'command': None
+    }
+    if col_name_trans is None:
+        col_name_trans = {'cmd': 'command'}
+    for i, word in enumerate(header):
+        word = word.lower()
+        if word in col_name_trans:
+            word = col_name_trans[word]
+        if word in interested:
+            interested[word] = i
+    processes = {}
+    for line in lines[1:]:
+        words = clean_split(line)
+        pid = words[interested['pid']]
+        user = words[interested['user']]
+        cmd = ' '.join(words[interested['command']:])
+        processes[pid] = {
+            "user": user,
+            "command": cmd
+        }
+    return processes
+
+def draw_table(table, rowsty=None, colsty=None, colsz = None):
+    def justify(s, align, width):
+        if align == 'c':
+            s = s.center(width)
+        elif align == 'r':
+            s = s.rjust(width)
+        elif align == 'l':
+            s = s.ljust(width)
+        return s
+    num_cols = len(table[0])
+    if rowsty is None:
+        rowsty = '|' + '|'.join(['c']*len(table)) + '|'
+    if colsty is None:
+        colsty = '|' + '|'.join(['c']*num_cols) + '|'
+    # check tables.
+    for row in table:
+        if len(row) != num_cols:
+            raise ValueError('different cols!')
+    col_width = [0] * num_cols
+    if colsz is None:
+        colsz = [None] * num_cols
+    # collect widths.
+    for row in table:
+        for i, col in enumerate(row):
+            col = str(col)
+            width = max([len(c) for c in col.split('\n')])
+            if colsz[i] is not None and colsz[i] < width:
+                width = colsz[i]
+            if width > col_width[i]:
+                col_width[i] = width
+    # prepare vline.
+    vline = []
+    colaligns = []
+    col_pos = 0
+    line_delemeter = '-'
+    content_delemeter = ' '
+    for ch in colsty:
+        if ch == '|':
+            vline.append('+')
+        elif ch in ['c', 'l', 'r']:
+            colaligns.append(ch)
+            vline.append('-' * col_width[col_pos])
+            col_pos += 1
+    vline = line_delemeter.join(vline)
+    table_to_draw = []
+    row_pos = 0
+    for ch in rowsty:
+        if ch == '|':
+            table_to_draw.append("vline")
+        elif ch in ['c', 'l', 'r']:
+            table_to_draw.append(table[row_pos])
+            row_pos += 1;
+    strings = []
+    for row in table_to_draw:
+        if type(row) is str:
+            strings.append(vline)
+            continue
+        new_row = []
+        max_cols = 1
+        for word, align, width in zip(row, colaligns, col_width):
+            cols = []
+            lines = word.split('\n')
+            for line in lines:
+                while len(line) > 0:
+                    cols.append(line[:width])
+                    line = line[width:]
+            cols = [justify(col, align, width) for col in cols]
+            if len(cols) > max_cols:
+                max_cols = len(cols)
+            new_row.append(cols)
+        for cols, width in zip(new_row, col_width):
+            empty = ' ' * width
+            while len(cols) < max_cols:
+                cols.append(empty)
+        rows = list(zip(*new_row))
+        for row in rows:
+            cols_to_drawn = []
+            col_pos = 0
+            for ch in colsty:
+                if ch == '|':
+                    cols_to_drawn.append('|')
+                elif ch in ['c', 'r', 'l']:
+                    cols_to_drawn.append(row[col_pos])
+                    col_pos += 1
+            strings.append(content_delemeter.join(cols_to_drawn))
+    return '\n'.join(strings)
 
 class GPUStat():
     def __init__(self):
@@ -223,13 +321,35 @@ class GPUStat():
         self.cuda_version = ''
         self.attached_gpus = ''
         self.driver_version = ''
+        self.nvsmi_source = None
+        self.ps_source = None
+        self.ps_name_trans = None
+        self.load_configure()
+
+    def load_configure(self):
+        configuration_path = os.path.expanduser('~/.gpuutil.conf')
+        if os.path.isfile(configuration_path):
+            configuration = loaddict(configuration_path)
+            if 'redirect' in configuration:
+                if 'nvsmi_src' in configuration['redirect']:
+                    self.nvsmi_source = configuration['redirect']['nvsmi_src']
+                if 'ps_src' in configuration['redirect']:
+                    self.ps_source = configuration['redirect']['ps_src']
+                if 'ps_name_trans' in configuration['redirect']:
+                    self.ps_name_trans = configuration['redirect']['ps_name_trans']
+
     def get_process_info(self):
+        if self.ps_source is not None:
+            return get_basic_process_info_by_file(self.ps_source, self.ps_name_trans)
         if osname == 'Windows':
             return get_basic_process_info_windows()
         elif osname == 'Linux':
             return get_basic_process_info_linux()
     def parse(self):
-        self.raw_info = parse_nvsmi_info('nvidia-smi -q -x')
+        if self.nvsmi_source is None:
+            self.raw_info = parse_nvsmi_info(exe_cmd('nvidia-smi -q -x'))
+        else:
+            self.raw_info = parse_nvsmi_info(loadfile(self.nvsmi_source))
         self.detailed_info = {}
         for key, value in self.raw_info.items():
             if key != 'gpu':
@@ -239,15 +359,18 @@ class GPUStat():
                 value = [value]
             self.detailed_info[key] = [parse_gpu_info(info) for info in value]
         self.process_info = self.get_process_info()
-        self.simplified_info = {
-            "driver_version": self.detailed_info["driver_version"],
-            "cuda_version": self.detailed_info["cuda_version"],
-            "attached_gpus": self.detailed_info["attached_gpus"],
-            "gpus": [simplify_gpu_info(stat) for stat in self.detailed_info["gpu"]]
-        }
-        self.cuda_version = self.simplified_info["cuda_version"]
-        self.driver_version = self.simplified_info["driver_version"]
-        self.attached_gpus = self.simplified_info["attached_gpus"]
+        self.simplified_info = {}
+        for key in self.detailed_info:
+            if key != "gpu":
+                self.simplified_info[key] = self.detailed_info[key]
+            else:
+                self.simplified_info["gpus"] = [simplify_gpu_info(stat) for stat in self.detailed_info["gpu"]]
+        if "cuda_version" in self.simplified_info:
+            self.cuda_version = self.simplified_info["cuda_version"]
+        if "driver_version" in self.simplified_info:
+            self.driver_version = self.simplified_info["driver_version"]
+        if "attached_gpus" in self.simplified_info:
+            self.attached_gpus = self.simplified_info["attached_gpus"]
         self.gpus = []
         for i, gpu in enumerate(self.simplified_info["gpus"]):
             for process in gpu['processes']:
@@ -255,7 +378,7 @@ class GPUStat():
             gpu['id'] = i
             self.gpus.append(gpu)
 
-    def show(self, enabled_cols = ['ID', 'Fan', 'Temp', 'Pwr', 'Freq', 'Util', 'Vmem', 'Users'], show_command=True):
+    def show(self, enabled_cols = ['ID', 'Fan', 'Temp', 'Pwr', 'Freq', 'Util', 'Vmem', 'Users'], colsty=None, colsz=None, show_command=True, vertical=False, tostdout=True):
         self.parse()
         gpu_infos = []
         # stats = {
@@ -273,11 +396,23 @@ class GPUStat():
         #     "mem_free": stat['memory']['free'].split(' ')[0].strip()
         # }
         for gpu in self.gpus:
-            process_fmt = '{user}({pid})'
-            process_info = ','.join([process_fmt.format(
-                user = proc['user'],
-                pid = proc['pid']
-            ) for proc in gpu['processes']])
+            # process_fmt = '{user}({pid})'
+            # process_info = ','.join([process_fmt.format(
+            #     user = proc['user'],
+            #     pid = proc['pid']
+            # ) for proc in gpu['processes']])
+            process_fmt = '{user}({pids})'
+            users_process = {}
+            for proc in gpu['processes']:
+                user = proc['user']
+                pid = proc['pid']
+                if user not in users_process:
+                    users_process[user] = []
+                users_process[user].append(pid)
+            delemeter = ','
+            if vertical:
+                delemeter = '\n'
+            process_info = delemeter.join(process_fmt.format(user=user, pids = '|'.join(users_process[user])) for user in users_process)
             info_gpu = {
                 'ID': '{0}'.format(str(gpu['id'])),
                 'Fan': '{0} %'.format(gpu['fan_speed'].split(' ')[0].strip()),
@@ -307,31 +442,41 @@ class GPUStat():
         for info in gpu_infos:
             this_row = [info[key] for key in enabled_cols]
             info_table.append(this_row)
-        info = draw_table(info_table, header_line=0, delemeter=' | ', joint_delemeter='-+-', c_align=c_align)
+        info = draw_table(info_table, rowsty='|c|{0}|'.format('c'*(len(info_table)-1)), colsty=colsty, colsz=colsz) + '\n'
         if show_command:
             procs = {}
             for gpu in self.gpus:
                 for proc in gpu['processes']:
                     pid = proc['pid']
                     proc['gpu'] = [str(gpu['id'])]
+                    if type(proc['vmem']) is str:
+                        try:
+                            proc['vmem'] = int(proc['vmem'].split(' ')[0])
+                        except:
+                            proc['vmem'] = 0
                     if pid not in procs:
                         procs[pid] = proc
                     else:
                         procs[pid]['gpu'].append(str(gpu['id']))
+                        procs[pid]['vmem'] += proc['vmem']
             proc_fmt = '[{pid}|{gpus}] {user}({vmem} MiB) {cmd}'
             proc_strs = []
             for pid in procs:
                 this_proc_str = proc_fmt.format(
                     user = procs[pid]['user'],
-                    vmem = procs[pid]['vmem'].split(' ')[0],
+                    vmem = procs[pid]['vmem'],
                     pid = procs[pid]['pid'].rjust(5),
                     cmd = procs[pid]['command'],
                     gpus = ','.join(procs[pid]['gpu'])
                 )
                 proc_strs.append(this_proc_str)
             proc_info = '\n'.join(proc_strs)
+            table_width = info.find('\n')
+            proc_info = draw_table([['Process Info'.center(table_width-4)], [proc_info]], rowsty="c|c|", colsty="|l|", colsz=[table_width-4])
             info += proc_info
-        print(info)
+        if tostdout:
+            print(info)
+        return info
 
 class MoreGPUNeededError(Exception):
     def __init__(self):
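To make the new rowsty/colsty convention concrete, here is a small hand-worked call of the draw_table defined above (my trace of the code, with draw_table assumed to be in scope; '|' draws a border, c/l/r pick the alignment of a row or column):

```python
# Hand-worked example of the new draw_table API from the hunk above.
table = [['ID', 'Util'], ['0', '93 %']]
print(draw_table(table, rowsty='|c|c|', colsty='|l|r|'))
# Expected output, per my reading of the code:
# +----+------+
# | ID | Util |
# +----+------+
# | 0  | 93 % |
# +----+------+
```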

gpuutil/set_redirect.py (new file): 42 added lines

@@ -0,0 +1,42 @@
+import argparse
+import os
+from gpuutil import loaddict, savedict
+
+availabel_name_trans = ['command', 'user', 'pid']
+
+parser = argparse.ArgumentParser()
+parser.add_argument('--nvsmi', '-nv', default=None, type=str, help='a file indicates real nvidia-smi -q -x output.')
+parser.add_argument('--ps', '-ps', default=None, type=str, help='a file indicates real ps-like output.')
+parser.add_argument('--ps_name_trans', '-pst', default=None, type=str, help='a dict of name trans, \
+    format: name1=buildin,name2=buildin, \
+    buildin can be choosen from {0}'.format(','.join(availabel_name_trans)))
+args = parser.parse_args()
+
+# lets chech the pst.
+parsed_name_trans = {}
+name_trans = args.ps_name_trans
+if name_trans is not None:
+    name_trans = name_trans.split(',')
+    name_trans = [t.strip() for t in name_trans]
+    name_trans = [t for t in name_trans if t!='']
+    for item in name_trans:
+        item = item.split('=', maxsplit=1)
+        if len(item) != 2:
+            raise ValueError('there must be a = in nametrans')
+        key, value = item
+        if value not in availabel_name_trans:
+            raise ValueError('given buildin name {0} do not exist, avaliable: {1}'.format(value, ','.join(availabel_name_trans)))
+        parsed_name_trans[key] = value
+
+config_file = os.path.expanduser('~/.gpuutil.conf')
+configuration = {}
+if os.path.isfile(config_file):
+    configuration = loaddict(config_file)
+configuration['redirect'] = {
+    "nvsmi_src": args.nvsmi,
+    "ps_src": args.ps,
+    "ps_name_trans": parsed_name_trans
+}
+savedict(config_file, configuration)
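The README advice earlier in this diff suggests a host-side script that periodically dumps nvidia-smi and ps output into a directory which is then mounted read-only (possibly as tmpfs) into the container. One possible sketch; the output directory, file names, and interval are placeholders, not part of gpuutil:

```python
# Hypothetical host-side helper that produces the two files set_redirect points at.
import os
import time

OUT_DIR = '/srv/gpuutil-redirect'  # placeholder path, mount read-only into the container

os.makedirs(OUT_DIR, exist_ok=True)
while True:
    nvsmi = os.popen('nvidia-smi -q -x').read()
    ps = os.popen('ps -axo user,pid,command').read()
    with open(os.path.join(OUT_DIR, 'nvsmi.xml'), 'w', encoding='utf-8') as f:
        f.write(nvsmi)
    with open(os.path.join(OUT_DIR, 'ps.txt'), 'w', encoding='utf-8') as f:
        f.write(ps)
    time.sleep(5)
```

Inside the container, those two paths would then be passed to ```python -m gpuutil.set_redirect -nv .../nvsmi.xml -ps .../ps.txt```.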


@@ -2,7 +2,7 @@ from setuptools import setup, find_packages
 
 setup(
     name = 'gpuutil',
-    version = '0.0.2',
+    version = '0.0.5',
     keywords='gpu utils',
     description = 'A tool for observing gpu stat and auto set visible gpu in python code.',
     license = 'MIT License',