Compare commits


No commits in common. "master" and "v0.0.1" have entirely different histories.

6 changed files with 59 additions and 577 deletions

.gitignore

@@ -3,8 +3,6 @@
__pycache__/
*.py[cod]
*$py.class
try.py
.vscode/
# C extensions
*.so

README.md

@@ -5,56 +5,24 @@ A naive tool for observing gpu status and auto set visible gpu in python code.
## How to use
1. install the package.
```shell
pip install https://git.zmy.pub/zmyme/gpuutil/archive/v0.0.5.tar.gz
pip install https://git.zmy.pub/zmyme/gpuutil/archive/v0.0.1.tar.gz
```
2. For observing gpu status, just run
```shell
python -m gpuutil <options>
```
When running ```python -m gpuutil``` directly, you will probably get:
```text
+----+------+------+----------+----------+------+----------------+
| ID | Fan | Temp | Pwr | Freq | Util | Vmem |
+----+------+------+----------+----------+------+----------------+
| 0 | 22 % | 21 C | 9.11 W | 300 MHz | 0 % | 3089/11019 MiB |
| 1 | 22 % | 23 C | 6.28 W | 300 MHz | 0 % | 786/11019 MiB |
| 2 | 38 % | 59 C | 92.04 W | 1890 MHz | 6 % | 3608/11019 MiB |
| 3 | 40 % | 67 C | 246.38 W | 1740 MHz | 93 % | 3598/11019 MiB |
+----+------+------+----------+----------+------+----------------+
| Process Info |
+----------------------------------------------------------------+
| [26107|0] user1(737 MiB) python |
| [34033|0,1] user2(1566 MiB) python |
| [37190|0] user2(783 MiB) python |
| [37260|0] user2(783 MiB) python |
| [30356|2] user3(3605 MiB) python train.py --args --some really |
| long arguments |
| [34922|3] user3(3595 MiB) python train.py --args --some really |
| long arguments version 2 |
+----------------------------------------------------------------+
```
where options can be either "brief" or "detail", and you will get something like the example further below.
To get more information, run ```python -m gpuutil -h```; you will get:
```text
usage: __main__.py [-h] [--profile PROFILE] [--cols COLS] [--style STYLE]
[--show-process SHOW_PROCESS] [--vertical VERTICAL] [--save]
optional arguments:
-h, --help show this help message and exit
--profile PROFILE, -p PROFILE
profile keyword, corresponding configuration are saved in ~/.gpuutil.conf
--cols COLS, -c COLS columns to show. (Available cols: ['ID', 'Fan', 'Temp', 'TempMax', 'Pwr',
'PwrMax', 'Freq', 'FreqMax', 'Util', 'Vmem', 'UsedMem', 'TotalMem', 'FreeMem',
'Users'])
--style STYLE, -sty STYLE
column style, format: |c|l:15|r|c:14rl:13|, c,l,r are align methods, | is line
and :(int) are width limit.
--show-process SHOW_PROCESS, -sp SHOW_PROCESS
whether show process or not
--vertical VERTICAL, -v VERTICAL
whether show each user in different lines. (show user vertically)
--save save config to profile
================== GPU INFO ==================
[0] Utils: 94 % | Mem: 10166/11019 MiB(853MiB free) user1(10163MiB,pid=14018)
[1] Utils: 89 % | Mem: 6690/11019 MiB(4329MiB free) user2(6687MiB,pid=19855)
[2] Utils: 0 % | Mem: 1/11019 MiB(11018MiB free)
[3] Utils: 0 % | Mem: 1/11019 MiB(11018MiB free)
================ PROCESS INFO ================
[14018] user1(10163 MiB) python train.py --some -args
[19855] user2(6687 MiB) python train.py --some --different --args
```
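For example, with the options shown in the help above, you could show a few columns with a custom style string and save that selection as a profile for reuse. The column list and style string below are only an illustration, not recommended defaults:
```shell
python -m gpuutil -c ID,Util,Vmem,Users -sty "|r|r|r|l:40|" -p mycols --save
python -m gpuutil -p mycols
```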
3. To automatically set the visible gpu in your python code, just use the following python code (a fuller sketch follows the block).
@@ -74,46 +42,5 @@ def auto_set(num, allow_nonfree=True, ask=True, blacklist=[], show=True):
# some code here.
```
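The code block above is truncated in this diff. A minimal sketch of the intended usage, based on the `auto_set` signature shown in the hunk header (the import path is an assumption):
```python
from gpuutil import auto_set  # assumed import path; adjust if auto_set lives elsewhere

# Pick one free GPU; this presumably sets CUDA_VISIBLE_DEVICES for the current process,
# so call it before importing your deep-learning framework.
auto_set(1, allow_nonfree=True, ask=True, blacklist=[], show=True)

# some code here, e.g. build the model on the selected GPU.
```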
## Use this inside a docker container
For some reason, code running inside docker cannot get correct information about the processes using the gpu.
To work around this, gpuutil can read the output of the nvidia-smi and ps commands from given files, which you generate on the host machine.
To use this inside docker, follow these steps:
1. Figure out a way to pass the output of ```nvidia-smi -q -x``` into the docker container you are currently using, saving the output as a text file.
2. Pass the output of a ps-like command into the container. This is a table-like output whose first line is a header containing at least user, pid and command. Below is a valid output generated by running ```ps -axo user,pid,command``` on the host machine:
```
USER PID COMMAND
root 1 /bin/bash -c bash /etc/init.docker; /usr/sbin/sshd -D
root 8 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
root 9 sshd: user1 [priv]
user1 19 sshd: user1@pts/0
user1 20 -zsh
user1 97 tmux
user1 98 -zsh
```
If your generated output uses different column names, for example when you use ```docker top``` instead of ```ps``` the ```COMMAND``` column becomes ```CMD```, you need to prepare a dict that maps your column names to one of ```user, pid, command```; note that the matching is case-insensitive.
3. Run the configuration script:
```shell
python -m gpuutil.set_redirect -nv path/to/your/nvidia/output -ps /path/to/your/ps/output -pst cmd=command,username=user
```
For more information about the script, run ```python -m gpuutil.set_redirect -h```; you will get:
```
usage: set_redirect.py [-h] [--nvsmi NVSMI] [--ps PS] [--ps_name_trans PS_NAME_TRANS]
optional arguments:
-h, --help show this help message and exit
--nvsmi NVSMI, -nv NVSMI
a file indicates real nvidia-smi -q -x output.
--ps PS, -ps PS a file indicates real ps-like output.
--ps_name_trans PS_NAME_TRANS, -pst PS_NAME_TRANS
a dict of name trans, format: name1=buildin,name2=buildin, buildin can be choosen from cmd,user,pid
```
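After running the script, ```~/.gpuutil.conf``` gains a ```redirect``` section roughly like the following sketch (keys taken from the set_redirect source shown later in this diff; the paths are placeholders carried over from the example command above):
```
{
    "redirect": {
        "nvsmi_src": "path/to/your/nvidia/output",
        "ps_src": "/path/to/your/ps/output",
        "ps_name_trans": {"cmd": "command", "username": "user"}
    }
}
```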
> Some advice:
> 1. You can use a script that runs the nvidia-smi and ps commands and saves their output to a directory, then mount that directory into the docker container as read-only (see the sketch below).
> 2. You could also consider mounting the directory as tmpfs.
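A minimal host-side sketch of that setup (the paths, image name, and refresh interval are placeholders, not part of gpuutil):
```shell
# On the host: periodically dump nvidia-smi and ps output (hypothetical paths).
mkdir -p /tmp/gpuutil-redirect
while true; do
    nvidia-smi -q -x > /tmp/gpuutil-redirect/nvsmi.xml
    ps -axo user,pid,command > /tmp/gpuutil-redirect/ps.txt
    sleep 5
done &

# Start the container with the directory mounted read-only (image name is a placeholder).
docker run --gpus all -v /tmp/gpuutil-redirect:/redirect:ro my-image

# Inside the container, point gpuutil at the dumped files once.
python -m gpuutil.set_redirect -nv /redirect/nvsmi.xml -ps /redirect/ps.txt
```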
## PS:
1. You can get more detailed gpu info by accessing the gpuutil.GPUStat class; for more information, just read the code (a short sketch follows this list).
2. Since it uses the ps command to get detailed process info, full process details are only available on linux; if you use it on windows, some information might be missing.
3. If you have any trouble, feel free to open an issue.
4. The code is straightforward, so taking a look at it is also a good option if you run into trouble.
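A minimal sketch of programmatic access (the field names are read off the GPUStat source further below in this diff, so treat them as assumptions rather than a stable API):
```python
from gpuutil import GPUStat

stat = GPUStat()
stat.parse()  # runs nvidia-smi -q -x (and ps), then fills stat.gpus

for gpu in stat.gpus:
    # keys mirror what show() reads from each simplified gpu dict
    print(gpu['id'], gpu['utilization'], gpu['memory']['used'], gpu['memory']['total'])
```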

gpuutil/__main__.py

@@ -1,147 +1,18 @@
import curses
from gpuutil import GPUStat, loaddict, savedict
from gpuutil import GPUStat
import sys
import argparse
import os
def csv2list(csv):
l = [col.strip() for col in csv.split(',')]
return [col for col in l if col != '']
def str2bool(s):
if s.lower() in ['t', 'yes', 'y', 'aye', 'positive', 'true']:
return True
else:
return False
def load_config():
home_dir = os.path.expanduser('~')
configpath = os.path.join(home_dir, '.gpuutil.conf')
if not os.path.isfile(configpath):
return {}
return loaddict(configpath)
def save_config(config):
home_dir = os.path.expanduser('~')
configdir = os.path.join(home_dir, '.gpuutil.conf')
savedict(configdir, config)
# style format: |c|l:15|r|c:14rl:13|
def parse_style(style):
if style is None:
return None, None
components = []
limits = []
while len(style) > 0:
ch = style[0]
if ch == '|':
components.append(ch)
style = style[1:]
continue
elif ch in ['l', 'r', 'c']:
limit = None
style = style[1:]
if style[0] == ':':
style = style[1:]
digits = ''
while style[0].isdigit():
digits += style[0]
style = style[1:]
if digits != '':
limit = int(digits)
components.append(ch)
limits.append(limit)
style = ''.join(components)
return style, limits
if __name__ == '__main__':
stat = GPUStat()
avaliable_cols = ['ID', 'Fan', 'Temp', 'TempMax', 'Pwr', 'PwrMax', 'Freq', 'FreqMax', 'Util', 'Vmem', 'UsedMem', 'TotalMem', 'FreeMem', 'Users']
recommended_cols = ['ID', 'Fan', 'Temp', 'Pwr', 'Freq', 'Util', 'Vmem']
parser = argparse.ArgumentParser()
parser.add_argument('--profile', '-p', default='default', type=str, help='profile keyword, corresponding configuration are saved in ~/.gpuutil.conf')
parser.add_argument('--cols', '-c', type=csv2list, help='columns to show. (Available cols: {0})'.format(avaliable_cols))
parser.add_argument('--style', '-sty', type=str, default=None, help='column style, format: |c|l:15|r|c:14rl:13|, c,l,r are align methods, | is line and :(int) are width limit.')
parser.add_argument('--show-process', '-sp', default=True, type=str2bool, help='whether show process or not')
parser.add_argument('--vertical', '-v', default=False, type=str2bool, help='whether show each user in different lines. (show user vertically)')
parser.add_argument('--save', default=False, action="store_true", help='save config to profile')
parser.add_argument('--watch', '-w', default=-1, type=float, help='refresh interval in seconds for watch mode; a negative value prints once and exits')
args = parser.parse_args()
cols = args.cols if args.cols is not None else recommended_cols
show_process = args.show_process
style, limit = parse_style(args.style)
vertical = args.vertical
unexpected_cols = []
for col in cols:
if col not in avaliable_cols:
unexpected_cols.append(col)
if len(unexpected_cols) > 0:
raise ValueError('Unexpected cols {0} occurred. Cols must be chosen from {1}'.format(unexpected_cols, ','.join(avaliable_cols)))
if args.save:
params = {
"cols": cols,
"style": style,
"limit": limit,
"show-process": show_process,
"vertical": vertical
}
profile = args.profile if args.profile is not None else input('Please input your profile name:\n>>> ')
config = load_config()
config[profile] = params
save_config(config)
elif args.profile is not None:
config = load_config()
if 'default' not in config:
config['default'] = {
"cols": cols,
"style": style,
"limit": limit,
"show-process": show_process,
"vertical": vertical
}
if args.profile in config:
params = config[args.profile]
cols = params["cols"]
show_process = params["show-process"]
style = None
limit = None
vertical = False
if "style" in params:
style = params["style"]
if "limit" in params:
limit = params["limit"]
if "vertical" in params:
vertical = params["vertical"]
else:
raise ValueError('Profile does not exist.\nAvailable profiles: {0}'.format(','.join(list(config.keys()))))
info = stat.show(enabled_cols = cols, colsty=style, colsz=limit, vertical=vertical, show_command=show_process, tostdout=False)
if args.watch < 0:
print(info)
show_types = ['brief', 'detail']
default_type = 'brief'
show_type = default_type
if len(sys.argv) > 1:
show_type = str(sys.argv[1])
if show_type in show_types:
stat.show(disp_type=show_type)
else:
from curses import wrapper
import time
def continuous_watch(stdscr, info):
curses.curs_set(0)
stdscr.clear()
stdscr.nodelay(True)
lasttime = time.time()
try:
while True:
c = stdscr.getch()
if c in [ord('q'), ord('Q')]:
break
curses.flushinp()
hint = "Interval: {0} S | CurrentTime: {1}".format(args.watch, time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()))
stdscr.erase()
stdscr.addstr(0, 0, hint + '\n' + info)
stdscr.refresh()
passed_time = time.time() - lasttime
if passed_time < args.watch:
time.sleep(args.watch - passed_time)
lasttime = time.time()
info = stat.show(enabled_cols = cols, colsty=style, colsz=limit, vertical=vertical, show_command=show_process, tostdout=False)
except KeyboardInterrupt:
curses.flushinp()
pass
wrapper(continuous_watch, info)
print('The given type "{0}" is not understood; it should be chosen from {1}.\nUsing default type "{2}".'.format(show_type, show_types, default_type))
show_type = default_type
stat.show(disp_type=show_type)
# auto_set(1, ask=True, blacklist=[], show=True)

View File

@@ -1,40 +1,9 @@
from io import StringIO
from sys import platform
import xml.etree.ElementTree as ET
import os
import json
import random
import sys
import csv
import platform
osname = platform.system()
def loadfile(path):
with open(path, 'r', encoding='utf-8') as f:
return f.read()
def savefile(path, content):
with open(path, 'w+', encoding='utf-8') as f:
return f.write(content)
def loaddict(path):
content = loadfile(path)
content = content.strip()
if len(content) != 0:
return json.loads(content)
else:
return {}
def savedict(path, dictionary):
content = json.dumps(dictionary, indent=4, ensure_ascii=False)
savefile(path, content)
def clean_split(line, delemeter=' '):
words = line.split(delemeter)
words = [w.strip() for w in words]
words = [w for w in words if w != '']
return words
def exe_cmd(command):
pipe = os.popen(command)
return pipe.read()
def xml2dict(node):
node_dict = {}
@@ -50,8 +19,10 @@ def xml2dict(node):
node_dict[child.tag].append(xml2dict(child))
return node_dict
def parse_nvsmi_info(nvsmixml):
tree = ET.fromstring(nvsmixml)
def parse_nvsmi_info(command='nvidia-smi -q -x'):
pipe = os.popen(command)
xml = pipe.read()
tree = ET.fromstring(xml)
return xml2dict(tree)
def parse_gpu_info(stat):
@@ -150,20 +121,19 @@ def short_gpu_info(stat, disp_type='brief'):
util=stat_disp['util'],
mem=stat_disp['mem']
)
if len(process_info) > 0:
info += ' '
info += process_info
return info
def get_basic_process_info_linux():
def get_basic_process_info():
pipe = os.popen('ps axo user:20,pid,args:1024')
output = pipe.read()
lines = output.split('\n')[1:]
processes = {}
for line in lines:
words = clean_split(line)
words = [p for p in line.split(' ') if p != '']
if len(words) < 3:
continue
username = words[0]
@@ -175,142 +145,6 @@ def get_basic_process_info_linux():
}
return processes
def get_basic_process_info_windows():
pipe = os.popen("tasklist /FO CSV")
content = StringIO(pipe.read())
reader = csv.reader(content, delimiter=',', quotechar='"')
content = []
for row in reader:
content.append(list(row))
processes = {}
for line in content[1:]:
name, pid, _, _, _ = line
processes[pid] = {
"user": None,
"command": name
}
return processes
def get_basic_process_info_by_file(filepath, col_name_trans=None):
# assume the command is always the last column and the preceding columns contain no spaces.
content = loadfile(filepath)
lines = content.split('\n')
header = clean_split(lines[0])
interested = {
'user': None,
'pid': None,
'command': None
}
if col_name_trans is None:
col_name_trans = {'cmd': 'command'}
for i, word in enumerate(header):
word = word.lower()
if word in col_name_trans:
word = col_name_trans[word]
if word in interested:
interested[word] = i
processes = {}
for line in lines[1:]:
words = clean_split(line)
pid = words[interested['pid']]
user = words[interested['user']]
cmd = ' '.join(words[interested['command']:])
processes[pid] = {
"user": user,
"command": cmd
}
return processes
def draw_table(table, rowsty=None, colsty=None, colsz = None):
def justify(s, align, width):
if align == 'c':
s = s.center(width)
elif align == 'r':
s = s.rjust(width)
elif align == 'l':
s = s.ljust(width)
return s
num_cols = len(table[0])
if rowsty is None:
rowsty = '|' + '|'.join(['c']*len(table)) + '|'
if colsty is None:
colsty = '|' + '|'.join(['c']*num_cols) + '|'
# check tables.
for row in table:
if len(row) != num_cols:
raise ValueError('different cols!')
col_width = [0] * num_cols
if colsz is None:
colsz = [None] * num_cols
# collect widths.
for row in table:
for i, col in enumerate(row):
col = str(col)
width = max([len(c) for c in col.split('\n')])
if colsz[i] is not None and colsz[i] < width:
width = colsz[i]
if width > col_width[i]:
col_width[i] = width
# prepare vline.
vline = []
colaligns = []
col_pos = 0
line_delemeter = '-'
content_delemeter = ' '
for ch in colsty:
if ch == '|':
vline.append('+')
elif ch in ['c', 'l', 'r']:
colaligns.append(ch)
vline.append('-' * col_width[col_pos])
col_pos += 1
vline = line_delemeter.join(vline)
table_to_draw = []
row_pos = 0
for ch in rowsty:
if ch == '|':
table_to_draw.append("vline")
elif ch in ['c', 'l', 'r']:
table_to_draw.append(table[row_pos])
row_pos += 1
strings = []
for row in table_to_draw:
if type(row) is str:
strings.append(vline)
continue
new_row = []
max_cols = 1
for word, align, width in zip(row, colaligns, col_width):
cols = []
lines = word.split('\n')
for line in lines:
while len(line) > 0:
cols.append(line[:width])
line = line[width:]
cols = [justify(col, align, width) for col in cols]
if len(cols) > max_cols:
max_cols = len(cols)
new_row.append(cols)
for cols, width in zip(new_row, col_width):
empty = ' ' * width
while len(cols) < max_cols:
cols.append(empty)
rows = list(zip(*new_row))
for row in rows:
cols_to_drawn = []
col_pos = 0
for ch in colsty:
if ch == '|':
cols_to_drawn.append('|')
elif ch in ['c', 'r', 'l']:
cols_to_drawn.append(row[col_pos])
col_pos += 1
strings.append(content_delemeter.join(cols_to_drawn))
return '\n'.join(strings)
class GPUStat():
def __init__(self):
self.gpus = []
@@ -321,35 +155,8 @@ class GPUStat():
self.cuda_version = ''
self.attached_gpus = ''
self.driver_version = ''
self.nvsmi_source = None
self.ps_source = None
self.ps_name_trans = None
self.load_configure()
def load_configure(self):
configuration_path = os.path.expanduser('~/.gpuutil.conf')
if os.path.isfile(configuration_path):
configuration = loaddict(configuration_path)
if 'redirect' in configuration:
if 'nvsmi_src' in configuration['redirect']:
self.nvsmi_source = configuration['redirect']['nvsmi_src']
if 'ps_src' in configuration['redirect']:
self.ps_source = configuration['redirect']['ps_src']
if 'ps_name_trans' in configuration['redirect']:
self.ps_name_trans = configuration['redirect']['ps_name_trans']
def get_process_info(self):
if self.ps_source is not None:
return get_basic_process_info_by_file(self.ps_source, self.ps_name_trans)
if osname == 'Windows':
return get_basic_process_info_windows()
elif osname == 'Linux':
return get_basic_process_info_linux()
def parse(self):
if self.nvsmi_source is None:
self.raw_info = parse_nvsmi_info(exe_cmd('nvidia-smi -q -x'))
else:
self.raw_info = parse_nvsmi_info(loadfile(self.nvsmi_source))
self.raw_info = parse_nvsmi_info('nvidia-smi -q -x')
self.detailed_info = {}
for key, value in self.raw_info.items():
if key != 'gpu':
@@ -358,19 +165,16 @@ class GPUStat():
if type(value) is not list:
value = [value]
self.detailed_info[key] = [parse_gpu_info(info) for info in value]
self.process_info = self.get_process_info()
self.simplified_info = {}
for key in self.detailed_info:
if key != "gpu":
self.simplified_info[key] = self.detailed_info[key]
else:
self.simplified_info["gpus"] = [simplify_gpu_info(stat) for stat in self.detailed_info["gpu"]]
if "cuda_version" in self.simplified_info:
self.cuda_version = self.simplified_info["cuda_version"]
if "driver_version" in self.simplified_info:
self.driver_version = self.simplified_info["driver_version"]
if "attached_gpus" in self.simplified_info:
self.attached_gpus = self.simplified_info["attached_gpus"]
self.process_info = get_basic_process_info()
self.simplified_info = {
"driver_version": self.detailed_info["driver_version"],
"cuda_version": self.detailed_info["cuda_version"],
"attached_gpus": self.detailed_info["attached_gpus"],
"gpus": [simplify_gpu_info(stat) for stat in self.detailed_info["gpu"]]
}
self.cuda_version = self.simplified_info["cuda_version"]
self.driver_version = self.simplified_info["driver_version"]
self.attached_gpus = self.simplified_info["attached_gpus"]
self.gpus = []
for i, gpu in enumerate(self.simplified_info["gpus"]):
for process in gpu['processes']:
@@ -378,105 +182,31 @@ class GPUStat():
gpu['id'] = i
self.gpus.append(gpu)
def show(self, enabled_cols = ['ID', 'Fan', 'Temp', 'Pwr', 'Freq', 'Util', 'Vmem', 'Users'], colsty=None, colsz=None, show_command=True, vertical=False, tostdout=True):
def show(self, disp_type='brief', command=True):
self.parse()
gpu_infos = []
# stats = {
# "id": stat['id'],
# "fan": stat['fan_speed'].split(' ')[0].strip(),
# "temp_cur": stat['temperature']['current'].split(' ')[0].strip(),
# "temp_max": stat['temperature']['max'].split(' ')[0].strip(),
# "power_cur": stat['power']['current'].split(' ')[0].strip(),
# "power_max": stat['power']['max'].split(' ')[0].strip(),
# "clock_cur": stat['clocks']['current'].split(' ')[0].strip(),
# "clock_max": stat['clocks']['max'].split(' ')[0].strip(),
# "util": stat['utilization'],
# "mem_used": stat['memory']['used'].split(' ')[0].strip(),
# "mem_total": stat['memory']['total'].split(' ')[0].strip(),
# "mem_free": stat['memory']['free'].split(' ')[0].strip()
# }
for gpu in self.gpus:
# process_fmt = '{user}({pid})'
# process_info = ','.join([process_fmt.format(
# user = proc['user'],
# pid = proc['pid']
# ) for proc in gpu['processes']])
process_fmt = '{user}({pids})'
users_process = {}
for proc in gpu['processes']:
user = proc['user']
pid = proc['pid']
if user not in users_process:
users_process[user] = []
users_process[user].append(pid)
delemeter = ','
if vertical:
delemeter = '\n'
process_info = delemeter.join(process_fmt.format(user=user, pids = '|'.join(users_process[user])) for user in users_process)
info_gpu = {
'ID': '{0}'.format(str(gpu['id'])),
'Fan': '{0} %'.format(gpu['fan_speed'].split(' ')[0].strip()),
'Temp': '{0} C'.format(gpu['temperature']['current'].split(' ')[0].strip()),
'TempMax': '{0} C'.format(gpu['temperature']['max'].split(' ')[0].strip()),
'Pwr': '{0} W'.format(gpu['power']['current'].split(' ')[0].strip()),
'PwrMax': '{0} W'.format(gpu['power']['max'].split(' ')[0].strip()),
'Freq': '{0} MHz'.format(gpu['clocks']['current'].split(' ')[0].strip()),
'FreqMax': '{0} MHz'.format(gpu['clocks']['max'].split(' ')[0].strip()),
'Util': '{0} %'.format(gpu['utilization'].split(' ')[0]),
'Vmem': '{0}/{1} MiB'.format(
gpu['memory']['used'].split(' ')[0].strip(),
gpu['memory']['total'].split(' ')[0].strip(),
),
'UsedMem': '{0} MiB'.format(gpu['memory']['used'].split(' ')[0].strip()),
'TotalMem': '{0} MiB'.format(gpu['memory']['total'].split(' ')[0].strip()),
'FreeMem': '{0} MiB'.format(gpu['memory']['free'].split(' ')[0].strip()),
'Users': process_info
}
gpu_infos.append(info_gpu)
align_methods = {key:'r' for key in gpu_infos[0]}
align_methods['Users'] = 'l'
if enabled_cols is None:
enabled_cols = list(align_methods.keys())
c_align = [align_methods[col] for col in enabled_cols]
info_table = [enabled_cols]
for info in gpu_infos:
this_row = [info[key] for key in enabled_cols]
info_table.append(this_row)
info = draw_table(info_table, rowsty='|c|{0}|'.format('c'*(len(info_table)-1)), colsty=colsty, colsz=colsz) + '\n'
if show_command:
lines = [short_gpu_info(info, disp_type=disp_type) for info in self.gpus]
print('================== GPU INFO ==================')
print('\n'.join(lines))
if command:
print('================ PROCESS INFO ================')
procs = {}
for gpu in self.gpus:
for proc in gpu['processes']:
pid = proc['pid']
proc['gpu'] = [str(gpu['id'])]
if type(proc['vmem']) is str:
try:
proc['vmem'] = int(proc['vmem'].split(' ')[0])
except:
proc['vmem'] = 0
if pid not in procs:
procs[pid] = proc
else:
procs[pid]['gpu'].append(str(gpu['id']))
procs[pid]['vmem'] += proc['vmem']
proc_fmt = '[{pid}|{gpus}] {user}({vmem} MiB) {cmd}'
proc_fmt = '[{pid}] {user}({vmem} MiB) {cmd}'
proc_strs = []
for pid in procs:
this_proc_str = proc_fmt.format(
user = procs[pid]['user'],
vmem = procs[pid]['vmem'],
pid = procs[pid]['pid'].rjust(5),
cmd = procs[pid]['command'],
gpus = ','.join(procs[pid]['gpu'])
vmem = procs[pid]['vmem'].split(' ')[0],
pid = procs[pid]['pid'],
cmd = procs[pid]['command']
)
proc_strs.append(this_proc_str)
proc_info = '\n'.join(proc_strs)
table_width = info.find('\n')
proc_info = draw_table([['Process Info'.center(table_width-4)], [proc_info]], rowsty="c|c|", colsty="|l|", colsz=[table_width-4])
info += proc_info
if tostdout:
print(info)
return info
print(proc_info)
class MoreGPUNeededError(Exception):
def __init__(self):
@@ -554,6 +284,4 @@ def auto_set(num, allow_nonfree=True, ask=True, blacklist=[], show=True):
else:
raise MoreGPUNeededError
set_gpu(selected_gpu, show=show)
if __name__ == '__main__':
print(get_basic_process_info_windows())

gpuutil/set_redirect.py

@@ -1,42 +0,0 @@
import argparse
import os
from gpuutil import loaddict, savedict
availabel_name_trans = ['command', 'user', 'pid']
parser = argparse.ArgumentParser()
parser.add_argument('--nvsmi', '-nv', default=None, type=str, help='a file indicates real nvidia-smi -q -x output.')
parser.add_argument('--ps', '-ps', default=None, type=str, help='a file indicates real ps-like output.')
parser.add_argument('--ps_name_trans', '-pst', default=None, type=str, help='a dict of name trans, \
format: name1=buildin,name2=buildin, \
buildin can be choosen from {0}'.format(','.join(availabel_name_trans)))
args = parser.parse_args()
# let's check the ps_name_trans argument.
parsed_name_trans = {}
name_trans = args.ps_name_trans
if name_trans is not None:
name_trans = name_trans.split(',')
name_trans = [t.strip() for t in name_trans]
name_trans = [t for t in name_trans if t!='']
for item in name_trans:
item = item.split('=', maxsplit=1)
if len(item) != 2:
raise ValueError('there must be a = in nametrans')
key, value = item
if value not in availabel_name_trans:
raise ValueError('given buildin name {0} do not exist, avaliable: {1}'.format(value, ','.join(availabel_name_trans)))
parsed_name_trans[key] = value
config_file = os.path.expanduser('~/.gpuutil.conf')
configuration = {}
if os.path.isfile(config_file):
configuration = loaddict(config_file)
configuration['redirect'] = {
"nvsmi_src": args.nvsmi,
"ps_src": args.ps,
"ps_name_trans": parsed_name_trans
}
savedict(config_file, configuration)

setup.py

@@ -2,15 +2,15 @@ from setuptools import setup, find_packages
setup(
name = 'gpuutil',
version = '0.0.5',
version = '0.0.1',
keywords='gpu utils',
description = 'A tool for observing gpu stat and auto set visible gpu in python code.',
license = 'MIT License',
url = 'https://git.zmy.pub/zmyme/gpuutil',
url = '',
author = 'zmy',
author_email = 'izmy@qq.com',
packages = find_packages(),
include_package_data = True,
platforms = 'All',
platforms = 'any',
install_requires = [],
)