Is there any way for a child process to set an environment variable visible to its parent?
The answer is both yes and no. Let's get the "no" out of the way first: no matter what you do, there is no way for a child process to affect the value of a variable in its parent. That said, it is possible for a parent process and a child process to cooperate and share whatever you need them to share.
So, if you have a script that says MYVAR="foo" and it calls another script that sets MYVAR="bar", $MYVAR remains "foo" in the parent script. The same idea is true if you change directories in the called script: when it exits, you won't be in the directory it changed to.
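You can see this for yourself with a quick test (the /tmp/child.sh path here is just for illustration):

```shell
#!/bin/bash
# Quick demonstration; /tmp/child.sh is a made-up name for the child script
cat > /tmp/child.sh <<'EOF'
MYVAR="bar"       # child changes the variable...
cd /tmp           # ...and the directory
EOF

MYVAR="foo"
bash /tmp/child.sh
echo "$MYVAR"     # still "foo" - the child's changes died with it
```

The child gets a copy of the parent's environment; it can do whatever it likes with that copy, but the copy is thrown away when it exits.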
There are always ways to get what you need. One of the simplest is to NOT call the other script, but rather to execute it in-line, in the context of the current script. In most Unixy shells, you can use what's called "dot syntax". That is, instead of doing:
#!/bin/bash
MYVAR="foo"
otherscript
echo $MYVAR
you do this:
#!/bin/bash
MYVAR="foo"
. otherscript
echo $MYVAR
This way (with the ". otherscript") there won't be another process created. You can think of it like an "include". If "otherscript" changes MYVAR, that last echo will show the change.
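Here's a runnable sketch of that (again, /tmp/otherscript is just an illustrative path):

```shell
#!/bin/bash
# Dot-sourcing demonstration; /tmp/otherscript is a made-up path
cat > /tmp/otherscript <<'EOF'
MYVAR="bar"
EOF

MYVAR="foo"
. /tmp/otherscript
echo "$MYVAR"     # prints "bar" - the sourced script's change is visible
```

Because no new process is created, there is no parent/child boundary for the variable to cross.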
A couple of other interesting things: "otherscript" does not need to be executable, and if called as shown above (with no path), bash searches $PATH for it, falling back to the current directory (in POSIX mode there is no such fallback).
The C shells (/bin/csh, tcsh and others) spell this "source" instead of ".".
But I hope you are not using any csh or variant!
You can use "eval" if "otherscript" does things like:
#!/bin/bash
echo "MYVAR=$PATH"
In that case, your "parent" script (it's not really a parent anymore) does this:
#!/bin/bash
MYVAR="foo"
eval `./otherscript`
echo $MYVAR
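A self-contained version you can run as-is (the /tmp/otherscript path is just for illustration; note that eval of unquoted output breaks if the value contains spaces):

```shell
#!/bin/bash
# eval demonstration; /tmp/otherscript is a made-up path
cat > /tmp/otherscript <<'EOF'
#!/bin/bash
echo "MYVAR=$PATH"
EOF
chmod +x /tmp/otherscript

MYVAR="foo"
eval `/tmp/otherscript`      # the child prints an assignment; we execute it
echo "$MYVAR"                # now holds the same value as $PATH
```

The child's only job is to print valid shell assignments on stdout; the caller evals whatever it prints, so be sure you trust the child.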
If the "variable" fits in a small integer (0-255), the exit status of the child is available to the parent. That's the usual way of learning why something terminated. So, if your child process wants to communicate "128" to its parent, you can do this:
#!/bin/bash
# "otherscript"
# whatever stuff it does
exit 128

#!/bin/bash
# Parent
MYVAR="foo"
./otherscript
MYVAR=$?
echo $MYVAR
If more data needs to be passed than that, a named pipe is a little better than a file (and makes concurrency easier to handle), and shared memory is the fastest (but takes more work to set up). You can also just set up a bidirectional pipe between two programs; this can even work across a network if necessary.
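For example, a named-pipe version of the variable hand-off might look like this (the /tmp/myfifo path is arbitrary):

```shell
#!/bin/bash
# Named pipe sketch; /tmp/myfifo is an arbitrary path
rm -f /tmp/myfifo
mkfifo /tmp/myfifo

# the "child" writes into the pipe in the background;
# the write blocks until a reader opens the other end
( echo "bar" > /tmp/myfifo ) &

# the "parent" reads the value out of the pipe
read MYVAR < /tmp/myfifo
echo "$MYVAR"     # prints "bar"

rm /tmp/myfifo
```

Unlike $?, this can carry arbitrary strings, and the two sides only need to agree on the pipe's name.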
*Exactly* what happens with pipes depends on the shell, but the differences are very minor: a job control shell and a non-job-control shell will assign process groups differently. Under ordinary circumstances that is so unimportant to you that I probably shouldn't even mention it.
Pipes block when full. In bash, you can examine how much you can write before blocking with "ulimit -p", which reports the pipe size in 512-byte blocks; the bash man page plainly tells you that this is a read-only value (you can't make it larger).
However, a named pipe provides communication between two otherwise unrelated processes. One doesn't have to call the other, or even know its name, and one side of the pipe doesn't even have to be running when the other side runs. As a practical example, I often run across stupid programs that think they have to print to a device and don't know how to use the spooler. That's fine until we want to make the printer into a network printer. The solution is to make a named pipe and tell the dumb program that the pipe is its print device. You start another process that does this:
while :
do
    exec </dev/myfakenetprint
    lpr -P myrealnetprinter
done
Then when the program writes to the pipe /dev/myfakenetprint, it actually goes to the spooler.
Got something to add? Send me email.
© 2013-07-25 Anthony Lawrence