Compare commits


No commits in common. "master" and "v0.0.2" have entirely different histories.

6 changed files with 92 additions and 417 deletions

.gitignore vendored

@@ -3,8 +3,6 @@
__pycache__/
*.py[cod]
*$py.class
try.py
.vscode/
# C extensions
*.so

README.md

@@ -6,7 +6,7 @@ A naive tool for observing gpu status and auto set visible gpu in python code.
1. Install the package.
```shell
pip install https://git.zmy.pub/zmyme/gpuutil/archive/v0.0.5.tar.gz
pip install https://git.zmy.pub/zmyme/gpuutil/archive/v0.0.2.tar.gz
```
2. To observe the GPU status, just run
@@ -15,45 +15,29 @@ python -m gpuutil <options>
```
When running ```python -m gpuutil``` directly, you will probably get something like:
```text
+----+------+------+----------+----------+------+----------------+
| ID | Fan | Temp | Pwr | Freq | Util | Vmem |
+----+------+------+----------+----------+------+----------------+
| 0 | 22 % | 21 C | 9.11 W | 300 MHz | 0 % | 3089/11019 MiB |
| 1 | 22 % | 23 C | 6.28 W | 300 MHz | 0 % | 786/11019 MiB |
| 2 | 38 % | 59 C | 92.04 W | 1890 MHz | 6 % | 3608/11019 MiB |
| 3 | 40 % | 67 C | 246.38 W | 1740 MHz | 93 % | 3598/11019 MiB |
+----+------+------+----------+----------+------+----------------+
| Process Info |
+----------------------------------------------------------------+
| [26107|0] user1(737 MiB) python |
| [34033|0,1] user2(1566 MiB) python |
| [37190|0] user2(783 MiB) python |
| [37260|0] user2(783 MiB) python |
| [30356|2] user3(3605 MiB) python train.py --args --some really |
| long arguments |
| [34922|3] user3(3595 MiB) python train.py --args --some really |
| long arguments version 2 |
+----------------------------------------------------------------+
+---+------+------+---------+---------+------+---------------+
|ID | Fan | Temp | Pwr | Freq | Util | Vmem |
+---+------+------+---------+---------+------+---------------+
| 0 | 22 % | 33 C | 4.47 W | 300 MHz | 0 % | 1569/11019 MiB|
| 1 | 22 % | 35 C | 3.87 W | 300 MHz | 0 % | 3/11019 MiB|
| 2 | 22 % | 36 C | 8.22 W | 300 MHz | 0 % | 3/11019 MiB|
| 3 | 22 % | 36 C | 21.82 W | 300 MHz | 0 % | 3/11019 MiB|
+---+------+------+---------+---------+------+---------------+
[34860|0] user1(783 MiB) python train.py --some -args
[38694|0] user2(783 MiB) python train.py --some --other -args
```
To get more information, run ```python -m gpuutil -h```; you will get:
```text
usage: __main__.py [-h] [--profile PROFILE] [--cols COLS] [--style STYLE]
[--show-process SHOW_PROCESS] [--vertical VERTICAL] [--save]
python __main__.py -h
usage: __main__.py [-h] [--profile PROFILE] [--cols COLS] [--show-process SHOW_PROCESS] [--save]
optional arguments:
-h, --help show this help message and exit
--profile PROFILE, -p PROFILE
profile keyword, corresponding configuration are saved in ~/.gpuutil.conf
--cols COLS, -c COLS colums to show.(Availabel cols: ['ID', 'Fan', 'Temp', 'TempMax', 'Pwr',
'PwrMax', 'Freq', 'FreqMax', 'Util', 'Vmem', 'UsedMem', 'TotalMem', 'FreeMem',
'Users']
--style STYLE, -sty STYLE
column style, format: |c|l:15|r|c:14rl:13|, c,l,r are align methods, | is line
and :(int) are width limit.
--cols COLS, -c COLS colums to show
--show-process SHOW_PROCESS, -sp SHOW_PROCESS
whether show process or not
--vertical VERTICAL, -v VERTICAL
whether show each user in different lines. (show user vertically)
--save save config to profile
```
@@ -74,46 +58,6 @@ def auto_set(num, allow_nonfree=True, ask=True, blacklist=[], show=True):
# some code here.
```
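The hunk above shows only the tail of the README's auto_set example. As a hedged illustration, a fuller call might look like the sketch below; the import path and the meaning of each argument are assumptions based on the signature in the hunk header and the project description, not confirmed by this diff.
```python
# A minimal sketch based only on the auto_set signature shown above.
# Assumption: auto_set picks `num` GPUs and exports CUDA_VISIBLE_DEVICES for the
# current process, so it should run before importing any CUDA framework.
from gpuutil import auto_set  # assumed import path

auto_set(
    1,                   # number of GPUs to make visible
    allow_nonfree=True,  # presumably: also consider GPUs that already run jobs
    ask=True,            # presumably: ask before picking a non-free GPU
    blacklist=[],        # GPU ids that must never be selected
    show=True,           # presumably: print the GPU table before choosing
)
# some code here.
```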
## Use this inside a Docker container
For some reason, code running inside Docker cannot obtain correct information about the processes using the GPU.
To work around this, gpuutil can read the output of the nvidia-smi and ps commands from files that you generate on the host machine.
To use it inside Docker, follow these steps:
1. Figure out a way to pass the output of ```nvidia-smi -q -x``` into the container you are using, and save that output as a text file.
2. Pass the output of a ps-like command into the container. This is a table-like output whose first line is a header containing at least user, pid and command. Below is a valid output generated by running ```ps -axo user,pid,command``` on the host machine:
```
USER PID COMMAND
root 1 /bin/bash -c bash /etc/init.docker; /usr/sbin/sshd -D
root 8 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
root 9 sshd: user1 [priv]
user1 19 sshd: user1@pts/0
user1 20 -zsh
user1 97 tmux
user1 98 -zsh
```
If your generated output uses different column names, for example when you use ```docker top``` instead of ```ps``` the ```COMMAND``` column is named ```CMD```, you need to prepare a dict that maps those names to one of ```user, pid, command```; note that the mapping is case-insensitive.
3. Run the configuration script.
```shell
python -m gpuutil.set_redirect -nv path/to/your/nvidia/output -ps /path/to/your/ps/output -pst cmd=command,username=user
```
For more information about the script, run ```python -m gpuutil.set_redirect -h```; you will get:
```
usage: set_redirect.py [-h] [--nvsmi NVSMI] [--ps PS] [--ps_name_trans PS_NAME_TRANS]
optional arguments:
-h, --help show this help message and exit
--nvsmi NVSMI, -nv NVSMI
a file indicates real nvidia-smi -q -x output.
--ps PS, -ps PS a file indicates real ps-like output.
--ps_name_trans PS_NAME_TRANS, -pst PS_NAME_TRANS
a dict of name trans, format: name1=buildin,name2=buildin, buildin can be choosen from cmd,user,pid
```
> Some advice:
> 1. You can use a script that runs the nvidia-smi and ps commands and saves their output to a directory, then mount that directory into the container as read-only (a host-side sketch follows right after this note).
> 2. You could consider mounting the directory as tmpfs.
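As a concrete illustration of advice 1, a minimal host-side sketch follows; the script name, the snapshot directory, and the refresh interval are assumptions for illustration, not part of the project.
```python
# host_snapshot.py -- a hypothetical host-side helper, not shipped with gpuutil.
# It periodically captures `nvidia-smi -q -x` and a ps-like table into a
# directory that you then mount into the container (read-only, or as tmpfs).
import os
import subprocess
import time

SNAP_DIR = "/tmp/gpu-snapshots"  # assumed location; use whatever you mount

def snapshot():
    nvsmi = subprocess.run(["nvidia-smi", "-q", "-x"],
                           capture_output=True, text=True).stdout
    ps = subprocess.run(["ps", "-axo", "user,pid,command"],
                        capture_output=True, text=True).stdout
    with open(os.path.join(SNAP_DIR, "nvsmi.xml"), "w", encoding="utf-8") as f:
        f.write(nvsmi)
    with open(os.path.join(SNAP_DIR, "ps.txt"), "w", encoding="utf-8") as f:
        f.write(ps)

if __name__ == "__main__":
    os.makedirs(SNAP_DIR, exist_ok=True)
    while True:
        snapshot()
        time.sleep(10)  # arbitrary refresh interval
```
Inside the container you would then run the documented configuration step once, e.g. ```python -m gpuutil.set_redirect -nv /snapshots/nvsmi.xml -ps /snapshots/ps.txt```, where /snapshots is the assumed mount point of the directory above.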
## ps:
1. You can get more detailed GPU info by accessing the gpuutil.GPUStat class (see the sketch after these notes); for more information, just read the code.
2. Since it uses the ps command to get detailed process info, full details are only available on Linux; if you use it on Windows, some information might be missing.
3. If you have any trouble, feel free to open an issue.
4. The code is straightforward, so taking a look at it is also a good choice if you run into trouble.
1. You can get more detailed GPU info by accessing the gpuutil.GPUStat class; for more information, just read the code.
2. Since it uses the ps command to get detailed process info, it can only be used on Linux.
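As a hedged illustration of note 1, here is a minimal sketch of programmatic access that uses only names appearing in the code later in this diff; whether every attribute is identical in both compared versions is not guaranteed.
```python
# A minimal sketch of using GPUStat directly; the names below are taken from this diff.
from gpuutil import GPUStat

stat = GPUStat()
stat.parse()  # queries `nvidia-smi -q -x` (plus a ps-like command) and fills the fields below
print(stat.driver_version, stat.cuda_version, stat.attached_gpus)
for gpu in stat.gpus:  # simplified per-GPU dicts built by parse()
    print(gpu['id'], gpu['fan_speed'], len(gpu['processes']))

# The same table that `python -m gpuutil` prints (show() re-parses internally):
stat.show(enabled_cols=['ID', 'Util', 'Vmem', 'Users'], show_command=False)
```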

gpuutil/__main__.py

@@ -1,6 +1,6 @@
import curses
from gpuutil import GPUStat, loaddict, savedict
from gpuutil import GPUStat
import sys
import json
import argparse
import os
@@ -18,39 +18,13 @@ def load_config():
configpath = os.path.join(home_dir, '.gpuutil.conf')
if not os.path.isfile(configpath):
return {}
return loaddict(configpath)
with open(configpath, 'r', encoding='utf-8') as f:
return json.load(f)
def save_config(config):
home_dir = os.path.expanduser('~')
configdir = os.path.join(home_dir, '.gpuutil.conf')
savedict(configdir, config)
# style format: |c|l:15|r|c:14rl:13|
def parse_style(style):
if style is None:
return None, None
components = []
limits = []
while len(style) > 0:
ch = style[0]
if ch == '|':
components.append(ch)
style = style[1:]
continue
elif ch in ['l', 'r', 'c']:
limit = None
style = style[1:]
if style[0] == ':':
style = style[1:]
digits = ''
while style[0].isdigit():
digits += style[0]
style = style[1:]
if digits != '':
limit = int(digits)
components.append(ch)
limits.append(limit)
style = ''.join(components)
return style, limits
with open(configdir, 'w+', encoding='utf-8') as f:
json.dump(config, f, ensure_ascii=False, indent=4)
if __name__ == '__main__':
stat = GPUStat()
@@ -59,18 +33,13 @@ if __name__ == '__main__':
recommended_cols = ['ID', 'Fan', 'Temp', 'Pwr', 'Freq', 'Util', 'Vmem']
parser = argparse.ArgumentParser()
parser.add_argument('--profile', '-p', default='default', type=str, help='profile keyword, corresponding configuration are saved in ~/.gpuutil.conf')
parser.add_argument('--cols', '-c', type=csv2list, help='colums to show.(Availabel cols: {0}'.format(avaliable_cols))
parser.add_argument('--style', '-sty', type=str, default=None, help='column style, format: |c|l:15|r|c:14rl:13|, c,l,r are align methods, | is line and :(int) are width limit.')
parser.add_argument('--profile', '-p', default=None, type=str, help='profile keyword, corresponding configuration are saved in ~/.gpuutil.conf')
parser.add_argument('--cols', '-c', type=csv2list, help='colums to show')
parser.add_argument('--show-process', '-sp', default=True, type=str2bool, help='whether show process or not')
parser.add_argument('--vertical', '-v', default=False, type=str2bool, help='whether show each user in different lines. (show user vertically)')
parser.add_argument('--save', default=False, action="store_true", help='save config to profile')
parser.add_argument('--watch', '-w', default=-1, type=float, help='refresh interval in seconds; print once if negative')
args = parser.parse_args()
cols = args.cols if args.cols is not None else recommended_cols
show_process = args.show_process
style, limit = parse_style(args.style)
vertical = args.vertical
unexpected_cols = []
for col in cols:
if col not in avaliable_cols:
@@ -81,10 +50,7 @@ if __name__ == '__main__':
if args.save:
params = {
"cols": cols,
"style": style,
"limit": limit,
"show-process": show_process,
"vertical": vertical
"show-process": show_process
}
profile = args.profile if args.profile is not None else input('Please input your profile name:\n>>> ')
config = load_config()
@@ -92,56 +58,10 @@ if __name__ == '__main__':
save_config(config)
elif args.profile is not None:
config = load_config()
if 'default' not in config:
config['default'] = {
"cols": cols,
"style": style,
"limit": limit,
"show-process": show_process,
"vertical": vertical
}
if args.profile in config:
params = config[args.profile]
cols = params["cols"]
show_process = params["show-process"]
style = None
limit = None
vertical = False
if "style" in params:
style = params["style"]
if "limit" in params:
limit = params["limit"]
if "vertical" in params:
vertical = params["vertical"]
else:
raise ValueError('Profile do not exist.\nAvaliable Profiles:{0}'.format(','.join(list(config.keys()))))
info = stat.show(enabled_cols = cols, colsty=style, colsz=limit, vertical=vertical, show_command=show_process, tostdout=False)
if args.watch < 0:
print(info)
else:
from curses import wrapper
import time
def continuous_watch(stdscr, info):
curses.curs_set(0)
stdscr.clear()
stdscr.nodelay(True)
lasttime = time.time()
try:
while True:
c = stdscr.getch()
if c in [ord('q'), ord('Q')]:
break
curses.flushinp()
hint = "Interval: {0} S | CurrentTime: {1}".format(args.watch, time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()))
stdscr.erase()
stdscr.addstr(0, 0, hint + '\n' + info)
stdscr.refresh()
passed_time = time.time() - lasttime
if passed_time < args.watch:
time.sleep(args.watch - passed_time)
lasttime = time.time()
info = stat.show(enabled_cols = cols, colsty=style, colsz=limit, vertical=vertical, show_command=show_process, tostdout=False)
except KeyboardInterrupt:
curses.flushinp()
pass
wrapper(continuous_watch, info)
stat.show(enabled_cols = cols, show_command=show_process)

gpuutil/__init__.py

@@ -10,31 +10,6 @@ import platform
osname = platform.system()
def loadfile(path):
with open(path, 'r', encoding='utf-8') as f:
return f.read()
def savefile(path, content):
with open(path, 'w+', encoding='utf-8') as f:
return f.write(content)
def loaddict(path):
content = loadfile(path)
content = content.strip()
if len(content) != 0:
return json.loads(content)
else:
return {}
def savedict(path, dictionary):
content = json.dumps(dictionary, indent=4, ensure_ascii=False)
savefile(path, content)
def clean_split(line, delemeter=' '):
words = line.split(delemeter)
words = [w.strip() for w in words]
words = [w for w in words if w != '']
return words
def exe_cmd(command):
pipe = os.popen(command)
return pipe.read()
def xml2dict(node):
node_dict = {}
@@ -50,8 +25,10 @@ def xml2dict(node):
node_dict[child.tag].append(xml2dict(child))
return node_dict
def parse_nvsmi_info(nvsmixml):
tree = ET.fromstring(nvsmixml)
def parse_nvsmi_info(command='nvidia-smi -q -x'):
pipe = os.popen(command)
xml = pipe.read()
tree = ET.fromstring(xml)
return xml2dict(tree)
def parse_gpu_info(stat):
@@ -163,7 +140,7 @@ def get_basic_process_info_linux():
lines = output.split('\n')[1:]
processes = {}
for line in lines:
words = clean_split(line)
words = [p for p in line.split(' ') if p != '']
if len(words) < 3:
continue
username = words[0]
@@ -191,125 +168,50 @@ def get_basic_process_info_windows():
}
return processes
def get_basic_process_info_by_file(filepath, col_name_trans=None):
# assume the command is always the last column, and the preceding columns contain no spaces.
content = loadfile(filepath)
lines = content.split('\n')
header = clean_split(lines[0])
interested = {
'user': None,
'pid': None,
'command': None
}
if col_name_trans is None:
col_name_trans = {'cmd': 'command'}
for i, word in enumerate(header):
word = word.lower()
if word in col_name_trans:
word = col_name_trans[word]
if word in interested:
interested[word] = i
processes = {}
for line in lines[1:]:
words = clean_split(line)
pid = words[interested['pid']]
user = words[interested['user']]
cmd = ' '.join(words[interested['command']:])
processes[pid] = {
"user": user,
"command": cmd
}
return processes
def draw_table(table, rowsty=None, colsty=None, colsz = None):
def justify(s, align, width):
if align == 'c':
s = s.center(width)
elif align == 'r':
s = s.rjust(width)
elif align == 'l':
s = s.ljust(width)
return s
num_cols = len(table[0])
if rowsty is None:
rowsty = '|' + '|'.join(['c']*len(table)) + '|'
if colsty is None:
colsty = '|' + '|'.join(['c']*num_cols) + '|'
# check tables.
for row in table:
if len(row) != num_cols:
raise ValueError('different cols!')
col_width = [0] * num_cols
if colsz is None:
colsz = [None] * num_cols
# collect widths.
def draw_table(table, header_line = 0, c_align = 'r', h_align='c', delemeter = ' | ', joint_delemeter = '-+-'):
# calculate max lengths.
num_columns = len(table[0])
def cvt_align(align, num_columns):
if type(align) is str:
if len(align) == 1:
return [align] * num_columns
elif len(align) == num_columns:
return list(align)
else:
raise ValueError('align flag length mismatch')
else:
return align
c_align = cvt_align(c_align, num_columns)
h_align = cvt_align(h_align, num_columns)
max_lengths = [0] * num_columns
for row in table:
for i, col in enumerate(row):
col = str(col)
width = max([len(c) for c in col.split('\n')])
if colsz[i] is not None and colsz[i] < width:
width = colsz[i]
if width > col_width[i]:
col_width[i] = width
# prepare vline.
vline = []
colaligns = []
col_pos = 0
line_delemeter = '-'
content_delemeter = ' '
for ch in colsty:
if ch == '|':
vline.append('+')
elif ch in ['c', 'l', 'r']:
colaligns.append(ch)
vline.append('-' * col_width[col_pos])
col_pos += 1
vline = line_delemeter.join(vline)
table_to_draw = []
row_pos = 0
for ch in rowsty:
if ch == '|':
table_to_draw.append("vline")
elif ch in ['c', 'l', 'r']:
table_to_draw.append(table[row_pos])
row_pos += 1;
strings = []
for row in table_to_draw:
if type(row) is str:
strings.append(vline)
continue
new_row = []
max_cols = 1
for word, align, width in zip(row, colaligns, col_width):
cols = []
lines = word.split('\n')
for line in lines:
while len(line) > 0:
cols.append(line[:width])
line = line[width:]
cols = [justify(col, align, width) for col in cols]
if len(cols) > max_cols:
max_cols = len(cols)
new_row.append(cols)
for cols, width in zip(new_row, col_width):
empty = ' ' * width
while len(cols) < max_cols:
cols.append(empty)
rows = list(zip(*new_row))
for row in rows:
cols_to_drawn = []
col_pos = 0
for ch in colsty:
if ch == '|':
cols_to_drawn.append('|')
elif ch in ['c', 'r', 'l']:
cols_to_drawn.append(row[col_pos])
col_pos += 1
strings.append(content_delemeter.join(cols_to_drawn))
return '\n'.join(strings)
if len(col) > max_lengths[i]:
max_lengths[i] = len(col)
width = sum(max_lengths) + num_columns * len(delemeter) + 1
hline = '+'
hline += joint_delemeter.join(['-' * length for length in max_lengths])
hline += '+\n'
info = hline
for i, row in enumerate(table):
info += '|'
row_just = []
align = h_align if i <= header_line else c_align
for w, col, a in zip(max_lengths, row, align):
if a == 'c':
row_just.append(col.center(w))
elif a == 'l':
row_just.append(col.ljust(w))
elif a == 'r':
row_just.append(col.rjust(w))
info += delemeter.join(row_just)
info += '|\n'
if i == header_line:
info += hline
info += hline
return info
class GPUStat():
def __init__(self):
@@ -321,35 +223,13 @@ class GPUStat():
self.cuda_version = ''
self.attached_gpus = ''
self.driver_version = ''
self.nvsmi_source = None
self.ps_source = None
self.ps_name_trans = None
self.load_configure()
def load_configure(self):
configuration_path = os.path.expanduser('~/.gpuutil.conf')
if os.path.isfile(configuration_path):
configuration = loaddict(configuration_path)
if 'redirect' in configuration:
if 'nvsmi_src' in configuration['redirect']:
self.nvsmi_source = configuration['redirect']['nvsmi_src']
if 'ps_src' in configuration['redirect']:
self.ps_source = configuration['redirect']['ps_src']
if 'ps_name_trans' in configuration['redirect']:
self.ps_name_trans = configuration['redirect']['ps_name_trans']
def get_process_info(self):
if self.ps_source is not None:
return get_basic_process_info_by_file(self.ps_source, self.ps_name_trans)
if osname == 'Windows':
return get_basic_process_info_windows()
elif osname == 'Linux':
return get_basic_process_info_linux()
def parse(self):
if self.nvsmi_source is None:
self.raw_info = parse_nvsmi_info(exe_cmd('nvidia-smi -q -x'))
else:
self.raw_info = parse_nvsmi_info(loadfile(self.nvsmi_source))
self.raw_info = parse_nvsmi_info('nvidia-smi -q -x')
self.detailed_info = {}
for key, value in self.raw_info.items():
if key != 'gpu':
@@ -359,18 +239,15 @@ class GPUStat():
value = [value]
self.detailed_info[key] = [parse_gpu_info(info) for info in value]
self.process_info = self.get_process_info()
self.simplified_info = {}
for key in self.detailed_info:
if key != "gpu":
self.simplified_info[key] = self.detailed_info[key]
else:
self.simplified_info["gpus"] = [simplify_gpu_info(stat) for stat in self.detailed_info["gpu"]]
if "cuda_version" in self.simplified_info:
self.cuda_version = self.simplified_info["cuda_version"]
if "driver_version" in self.simplified_info:
self.driver_version = self.simplified_info["driver_version"]
if "attached_gpus" in self.simplified_info:
self.attached_gpus = self.simplified_info["attached_gpus"]
self.simplified_info = {
"driver_version": self.detailed_info["driver_version"],
"cuda_version": self.detailed_info["cuda_version"],
"attached_gpus": self.detailed_info["attached_gpus"],
"gpus": [simplify_gpu_info(stat) for stat in self.detailed_info["gpu"]]
}
self.cuda_version = self.simplified_info["cuda_version"]
self.driver_version = self.simplified_info["driver_version"]
self.attached_gpus = self.simplified_info["attached_gpus"]
self.gpus = []
for i, gpu in enumerate(self.simplified_info["gpus"]):
for process in gpu['processes']:
@@ -378,7 +255,7 @@ class GPUStat():
gpu['id'] = i
self.gpus.append(gpu)
def show(self, enabled_cols = ['ID', 'Fan', 'Temp', 'Pwr', 'Freq', 'Util', 'Vmem', 'Users'], colsty=None, colsz=None, show_command=True, vertical=False, tostdout=True):
def show(self, enabled_cols = ['ID', 'Fan', 'Temp', 'Pwr', 'Freq', 'Util', 'Vmem', 'Users'], show_command=True):
self.parse()
gpu_infos = []
# stats = {
@@ -396,23 +273,11 @@ class GPUStat():
# "mem_free": stat['memory']['free'].split(' ')[0].strip()
# }
for gpu in self.gpus:
# process_fmt = '{user}({pid})'
# process_info = ','.join([process_fmt.format(
# user = proc['user'],
# pid = proc['pid']
# ) for proc in gpu['processes']])
process_fmt = '{user}({pids})'
users_process = {}
for proc in gpu['processes']:
user = proc['user']
process_fmt = '{user}({pid})'
process_info = ','.join([process_fmt.format(
user = proc['user'],
pid = proc['pid']
if user not in users_process:
users_process[user] = []
users_process[user].append(pid)
delemeter = ','
if vertical:
delemeter = '\n'
process_info = delemeter.join(process_fmt.format(user=user, pids = '|'.join(users_process[user])) for user in users_process)
) for proc in gpu['processes']])
info_gpu = {
'ID': '{0}'.format(str(gpu['id'])),
'Fan': '{0} %'.format(gpu['fan_speed'].split(' ')[0].strip()),
@@ -442,41 +307,31 @@ class GPUStat():
for info in gpu_infos:
this_row = [info[key] for key in enabled_cols]
info_table.append(this_row)
info = draw_table(info_table, rowsty='|c|{0}|'.format('c'*(len(info_table)-1)), colsty=colsty, colsz=colsz) + '\n'
info = draw_table(info_table, header_line=0, delemeter=' | ', joint_delemeter='-+-', c_align=c_align)
if show_command:
procs = {}
for gpu in self.gpus:
for proc in gpu['processes']:
pid = proc['pid']
proc['gpu'] = [str(gpu['id'])]
if type(proc['vmem']) is str:
try:
proc['vmem'] = int(proc['vmem'].split(' ')[0])
except:
proc['vmem'] = 0
if pid not in procs:
procs[pid] = proc
else:
procs[pid]['gpu'].append(str(gpu['id']))
procs[pid]['vmem'] += proc['vmem']
proc_fmt = '[{pid}|{gpus}] {user}({vmem} MiB) {cmd}'
proc_strs = []
for pid in procs:
this_proc_str = proc_fmt.format(
user = procs[pid]['user'],
vmem = procs[pid]['vmem'],
vmem = procs[pid]['vmem'].split(' ')[0],
pid = procs[pid]['pid'].rjust(5),
cmd = procs[pid]['command'],
gpus = ','.join(procs[pid]['gpu'])
)
proc_strs.append(this_proc_str)
proc_info = '\n'.join(proc_strs)
table_width = info.find('\n')
proc_info = draw_table([['Process Info'.center(table_width-4)], [proc_info]], rowsty="c|c|", colsty="|l|", colsz=[table_width-4])
info += proc_info
if tostdout:
print(info)
return info
print(info)
class MoreGPUNeededError(Exception):
def __init__(self):

gpuutil/set_redirect.py

@@ -1,42 +0,0 @@
import argparse
import os
from gpuutil import loaddict, savedict
availabel_name_trans = ['command', 'user', 'pid']
parser = argparse.ArgumentParser()
parser.add_argument('--nvsmi', '-nv', default=None, type=str, help='a file indicates real nvidia-smi -q -x output.')
parser.add_argument('--ps', '-ps', default=None, type=str, help='a file indicates real ps-like output.')
parser.add_argument('--ps_name_trans', '-pst', default=None, type=str, help='a dict of name trans, \
format: name1=buildin,name2=buildin, \
buildin can be choosen from {0}'.format(','.join(availabel_name_trans)))
args = parser.parse_args()
# let's check the pst (ps_name_trans) argument.
parsed_name_trans = {}
name_trans = args.ps_name_trans
if name_trans is not None:
name_trans = name_trans.split(',')
name_trans = [t.strip() for t in name_trans]
name_trans = [t for t in name_trans if t!='']
for item in name_trans:
item = item.split('=', maxsplit=1)
if len(item) != 2:
raise ValueError('there must be a = in nametrans')
key, value = item
if value not in availabel_name_trans:
raise ValueError('given buildin name {0} do not exist, avaliable: {1}'.format(value, ','.join(availabel_name_trans)))
parsed_name_trans[key] = value
config_file = os.path.expanduser('~/.gpuutil.conf')
configuration = {}
if os.path.isfile(config_file):
configuration = loaddict(config_file)
configuration['redirect'] = {
"nvsmi_src": args.nvsmi,
"ps_src": args.ps,
"ps_name_trans": parsed_name_trans
}
savedict(config_file, configuration)

setup.py

@@ -2,7 +2,7 @@ from setuptools import setup, find_packages
setup(
name = 'gpuutil',
version = '0.0.5',
version = '0.0.2',
keywords='gpu utils',
description = 'A tool for observing gpu stat and auto set visible gpu in python code.',
license = 'MIT License',