sh provides a few methods for running commands and obtaining output in a non-blocking fashion.
You may also create asynchronous commands by iterating over them with the _iter special kwarg. This creates an iterable (specifically, a generator) that you can loop over:
    from sh import tail

    # runs forever
    for line in tail("-f", "/var/log/some_log_file.log", _iter=True):
        print(line)
By default, _iter iterates over STDOUT, but you can set this explicitly by passing either "out" or "err" to _iter (instead of True). Also by default, output is line-buffered, so the body of the loop will only run when your process produces a newline. You can change this by changing the buffer size of the command’s output with _out_bufsize.
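For example, to iterate over fixed-size chunks of STDOUT instead of lines (1024 here is an arbitrary illustrative size):

    from sh import tail

    # the loop body now runs once per 1024-byte chunk, not per line
    for chunk in tail("-f", "/var/log/some_log_file.log", _iter="out", _out_bufsize=1024):
        print(chunk)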
If you need a fully non-blocking iterator, use _iter_noblock. If the current iteration would block, errno.EWOULDBLOCK will be returned; otherwise you’ll receive a chunk of output, as normal.
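A minimal sketch of a polling loop that can do other work while no output is ready:

    import errno
    import sh

    for chunk in sh.tail("-f", "/var/log/some_log_file.log", _iter_noblock=True):
        if chunk == errno.EWOULDBLOCK:
            # nothing to read yet; do other work here
            pass
        else:
            print(chunk)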
By default, each running command blocks until completion. If you have a long-running command, you can put it in the background with the _bg=True special kwarg:
    from sh import sleep

    # blocks
    sleep(3)
    print("...3 seconds later")

    # doesn't block
    p = sleep(3, _bg=True)
    print("prints immediately!")
    p.wait()
    print("...and 3 seconds later")
You’ll notice that you need to call RunningCommand.wait() in order for your script to block until the background command exits.
Commands launched in the background ignore SIGHUP, meaning that when their controlling process (the session leader, if there is a controlling terminal) exits, they will not be signalled by the kernel. But because sh commands launch their processes in their own sessions by default, meaning they are their own session leaders, ignoring SIGHUP will normally have no impact. So the only time ignoring SIGHUP will do anything is if you use _new_session=False, in which case the controlling process will probably be the shell from which you launched python, and exiting that shell would normally send a SIGHUP to all child processes.
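A sketch of the difference, assuming python was launched from an interactive shell with a controlling terminal (_new_session is the kwarg described above; the rest is illustrative):

    import sh

    # default: the command runs in its own session, so the kernel will
    # never send it a SIGHUP when your shell exits
    p1 = sh.sleep(300, _bg=True)

    # _new_session=False: the command stays in python's session, so exiting
    # the launching shell would send it a SIGHUP, which it ignores because
    # it was launched in the background
    p2 = sh.sleep(300, _bg=True, _new_session=False)

    p1.wait()
    p2.wait()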
For more information on the exact launch process, see Architecture Overview.
In combination with _bg=True, sh can use callbacks to process output incrementally by passing a callable function to _out and/or _err. This callable will be called for each line (or chunk) of data that your command outputs:
    from sh import tail

    def process_output(line):
        print(line)

    p = tail("-f", "/var/log/some_log_file.log", _out=process_output, _bg=True)
    p.wait()
To control whether the callback receives a line or a chunk, use _out_bufsize. To “quit” your callback, simply return True; this tells the command not to call your callback anymore. Returning True does not kill the process, it only keeps the callback from being called again. See Interactive callbacks for how to kill a process from a callback.
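For instance, a callback that stops listening after it sees some marker ("READY" is just an illustrative string; note that tail itself keeps running):

    from sh import tail

    def process_output(line):
        print(line)
        if "READY" in line:
            # stop receiving callbacks; the tail process is NOT killed
            return True

    p = tail("-f", "/var/log/some_log_file.log", _out=process_output, _bg=True)
    p.wait()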
Commands may communicate with the underlying process interactively through a specific callback signature. Each command launched through sh has an internal STDIN queue that can be used from callbacks:
    import sh

    def interact(line, stdin):
        if line == "What... is the air-speed velocity of an unladen swallow?":
            stdin.put("What do you mean? An African or European swallow?")
        elif line == "Huh? I... I don't know that....AAAAGHHHHHH":
            cross_bridge()
            return True
        else:
            stdin.put("I don't know....AAGGHHHHH")
            return True

    p = sh.bridgekeeper(_out=interact, _bg=True)
    p.wait()
If you use a queue, you can signal the end of input (EOF) by putting None onto the queue.
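A sketch with a hypothetical interactive program (some_prompting_command is made up; the pattern assumes that putting None signals EOF, as described above):

    import sh

    def interact(line, stdin):
        if "name?" in line:
            stdin.put("Arthur, King of the Britons\n")
            stdin.put(None)  # nothing more to send: signal EOF

    p = sh.some_prompting_command(_out=interact, _bg=True)
    p.wait()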
You can also kill or terminate your process (or send any signal, really) from your callback by adding a third argument to receive the process object:
    import sh

    def process_output(line, stdin, process):
        print(line)
        if "ERROR" in line:
            process.kill()
            return True

    p = sh.tail("-f", "/var/log/some_log_file.log", _out=process_output, _bg=True)

    try:
        p.wait()
    except sh.SignalException_SIGKILL:
        # expected: we killed the process from the callback
        pass
The above code will run, printing lines from some_log_file.log until "ERROR" appears in a line, at which point the tail process will be killed and the script will end.
A _done callback is called when the process exits, either normally (with a success or error exit code) or through a signal. It is always called. The callback receives the command object, a boolean indicating success, and the exit code.
Here’s an example of using _done to create a multiprocess pool, where sh.your_parallel_command is executed concurrently, at no more than 10 at a time:
    import sh
    from threading import Semaphore

    pool = Semaphore(10)

    def done(cmd, success, exit_code):
        pool.release()

    def do_thing(arg):
        pool.acquire()
        return sh.your_parallel_command(arg, _bg=True, _done=done)

    procs = []
    for arg in range(100):
        procs.append(do_thing(arg))

    # essentially a join
    [p.wait() for p in procs]