12. High-level languages and OS
Linux:
- Kernel — C
- Core system libraries — C
- Core system utilities — mostly C and shell
  - Can also be C++, Python, or Perl
- Scripting and integration
  - Mostly shell; Perl used to be popular, now Python is
  - Sometimes C++
- Almost any OS distribution ships all kinds of programming languages
⇒ Is C the only system programming language?
Windows:
- Core, kernel and libraries: C++ (seldom C)
- Scripting and integration:
  - Anything .NET-based (mostly C#)
  - C++
  - PowerShell (formerly cmd)
- Applications: C++, .NET, almost any third-party language
macOS:
- Kernel — C
- Core system and libraries — Objective-C and C, seldom C++
  - Now also Swift
- Scripting and integration: AppleScript and shell
- Applications: Swift, Objective-C, sometimes C++, almost any third-party language
Python and OS programming
- Cross-platform (+/-)
  - ⇒ its own implementation of OS features
- POSIX-oriented (⇒ Linux) (+/-)
  - ⇒ its own implementation of non-POSIX OS features
- High-level, but not OS-oriented (unlike shell) (+/-)
  - ⇒ its own implementation of OS features
- Has a lot of wrappers for non-privileged syscalls
- An incredible number of modules on PyPI
  - Including system-oriented ones
  - Including service-oriented ones
- Non «resource scrimping» style
  - If a resource does not multiply, use it as a whole, not piece by piece (see the sketch below)
    - E.g. .read() instead of .readline()
  - No need to count CPU cycles
  - …
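A tiny illustration of that style; /etc/passwd here is only an assumed example of a small file that is read in one go:

    # Read the whole file at once instead of looping over .readline()
    # (/etc/passwd is just an illustrative small file)
    with open("/etc/passwd") as f:
        data = f.read()
    print(len(data.splitlines()), "lines")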
Modules: os and sys
- Cross-platform paths: os.path and pathlib
  - .is*(), .exists() etc.
- os:
  - .environ
  - syscall wrappers (.fork, .getpid, .fstat, .popen, .wait, almost any! …)
- sys: .executable, .argv, .stdin, .stdout, .stderr, … (a short usage sketch follows this list)
Also:
- …
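A minimal usage sketch of the modules above; the printed values are whatever your system provides:

    import os
    import sys
    from pathlib import Path

    p = Path(sys.executable)                 # path of the running interpreter
    print(p.exists(), p.is_file())           # pathlib checks: .exists(), .is_file()
    print(os.path.dirname(sys.executable))   # the os.path counterpart

    print(os.getpid())                       # syscall wrapper: current process id
    print(os.environ.get("PATH", ""))        # environment access
    print(sys.argv)                          # command-line arguments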
Subprocess
Concept: cross-platform process execution with communication and exit status control.
- Just run and get a result: run()
  - capture_output=/input=; stdin=/stdout=/stderr= (a short sketch follows)
- …
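A minimal run() sketch; the command is illustrative and assumes a POSIX system with ls available:

    import subprocess

    # Run a command, capture its output, and check the exit status
    res = subprocess.run(["ls", "-l", "/"], capture_output=True, text=True)
    print(res.returncode)    # 0 on success
    print(res.stdout)        # captured standard output (str, because text=True)
    print(res.stderr)        # captured standard error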
- High-level popen() analog: Popen()
    from subprocess import Popen, PIPE

    # Run the first process, which writes to an unnamed pipe
    p1 = Popen(["cal", "-s"], stdout=PIPE)
    # Run the second process, which reads from the other end of that pipe
    # and writes to a second pipe
    p2 = Popen(["hexdump", "-C"], stdin=p1.stdout, stdout=PIPE)
    # Allow p1 to receive a SIGPIPE if p2 exits
    p1.stdout.close()
    # Read from the second pipe: communicate() returns (stdout, stderr);
    # stderr will be None because it is not redirected
    res = p2.communicate()
    # Note: the data is bytes, not str
    print(res[0].decode())
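In shell terms this pipeline is cal -s | hexdump -C; closing p1.stdout in the parent is what lets the first process receive SIGPIPE if hexdump exits early.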
Do not use os.system(): it is platform-dependent and unsafe.
Multiprocessing
About multithreaded programming in Python:
- It exists
- It is almost non-parallel because of the global interpreter lock (GIL)
- There is no apparent way to eliminate the GIL without significantly slowing down single-threaded code
- You can use multithreading if:
  - Only one thread eats CPU while the others perform I/O (see the sketch below)
  - You have a complex thread-based design with heavy, permanent use of shared resources
Unlike C programs, Python programs show no real difference between threaded and multiprocess design (the non «resource scrimping» style again)
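A minimal sketch of the I/O-bound case where threads still help despite the GIL; the URLs are placeholders and network access is assumed:

    import threading
    import urllib.request

    def fetch(url):
        # Each thread spends most of its time waiting on the network,
        # so the GIL is released and the downloads overlap
        with urllib.request.urlopen(url) as resp:
            print(url, len(resp.read()), "bytes")

    urls = ["https://example.com", "https://example.org"]
    threads = [threading.Thread(target=fetch, args=(u,)) for u in urls]
    for t in threads:
        t.start()
    for t in threads:
        t.join()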
The multiprocessing module.
Concept:
- Cross-platform
  - On Linux, fork() is used
- The child process runs a function (unlike classic fork(), which creates two equal processes)
  - That simplifies data transfer down to arguments/return values in simple cases
- Processes can communicate through special socket-like high-level objects or an object queue
- Processes can use high-level shared-memory-like objects or an object manager (the latter is slower, but can work over a network!)
- Processes can be orchestrated into a pool running exactly N processes in parallel (see the sketch after this list)
  - Exactly N child processes (called workers) are started
  - Each worker can execute a given function multiple times as work flows through the pool
  - No other start/stop actions are performed
  - Workers are stopped when the pool is empty
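A minimal sketch combining these ideas: one Process running a function and answering through a Queue, then a Pool of workers; the function names and worker count are illustrative only:

    import os
    from multiprocessing import Process, Queue, Pool

    def child(q, name):
        # The child process runs this function; the result goes back through the queue
        q.put((name, os.getpid()))

    def square(x):
        # A function executed repeatedly by the pool workers
        return x * x

    if __name__ == "__main__":   # required where spawn is used, harmless with fork
        # Single child process communicating through an object queue
        q = Queue()
        p = Process(target=child, args=(q, "worker"))
        p.start()
        print(q.get())           # ('worker', <child pid>)
        p.join()

        # Pool: exactly 4 workers execute the function over all inputs in parallel
        with Pool(processes=4) as pool:
            print(pool.map(square, range(10)))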
See the examples on the lab classes page.