There is something like that, a Unix shell. You can use the shell environment to string together small programs written in different languages.
I mostly work in nodejs, and instead of connecting my things through language level bindings, I just write simple wrappers to call a child process and handle stdout and stderr. It might not work for all situations, but it means I can build my bits and pieces out of whatever language I want.
I think the implicit desire of the parent is for the environment to handle data conversions for you.
Unix shell is an unstructured text environment. Data loses all semantic context the moment it's pushed to stdout, and thus every program that wants to interact with it needs to have its own, half-baked, buggy parser included. This is wasteful and a source of many maintenance and security problems.
In a Lisp machine demo by Kalman Reti, he talks about that aspect. Unix serializes everything to strings, while Lisp machines just pass pointers around. Obviously simpler and faster, but maybe less easy to move across machines (I'm speculating). I've always been saddened by the never-ending amount of grep/Perl fu needed in Unix to do the same thing over and over.
Similar feeling when using PowerShell: there are a lot of benefits to having a uniform set of interfaces for any kind of object. That said, it feels a little straitjacketing compared to crunching strings.
Yeah. I love PowerShell conceptually, but in practice it's cumbersome to exploit its object-based nature. Arguably this is only a UX problem. Say I want to kill all notepad.exe instances. I open PS and do:
Get-Process -Name notepad
(Thank you, autocomplete, for reminding me about the -Name parameter.) Now how do I kill it? Lemme see what I'm getting:
Get-Process -Name notepad | Get-Member
Aha! It says the object has a `void Kill()` method. But how do I invoke it on each of those? If I do a lot of this, I'll probably remember[0]:

Get-Process -Name notepad | ForEach-Object { $_.Kill() }
Awesome. With Tab completion, not that cumbersome for common tasks, but discoverability is annoying. I have to do that Get-Member dance for each new command / type of object I'm working with, just to know what properties and methods are there for me to use. The UNIX equivalent would be something like[1]:
kill $(ps aux | grep '[n]otepad' | awk '{print $2}')
But the difference is, I can arrive here just by knowing grep, awk, and seeing the output of ps. There's no extra step of inspecting the particular type of object a command returns. In UNIX, I have to write parsers for everything, but parser-writing tools like grep, sed, and awk generalize across all problems, and with them I can massage any output I see on screen into valid input for the command I want to use. Every complex command I write uses the same set of knowledge.
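What "writing a parser for everything" looks like once the consumer isn't a shell one-liner: a sketch (the sample `ps aux`-style output and the whitespace-splitting logic are illustrative, and the split silently breaks if a field can itself contain spaces):

```javascript
// Re-derive structure from plaintext process-listing output by hand:
// split lines, split columns on whitespace, pick fields by position.
const psOutput = [
  'USER PID COMMAND',
  'root 1 /sbin/init',
  'alice 4242 notepad',
].join('\n');

const pids = psOutput
  .split('\n')
  .slice(1)                               // drop the header row
  .map(line => line.trim().split(/\s+/))  // "parse" by whitespace
  .filter(cols => cols[2] === 'notepad')  // match on the COMMAND column
  .map(cols => Number(cols[1]));          // extract the PID column

console.log(pids); // [ 4242 ]
```

This is exactly the ad-hoc parsing the thread complains about: the field positions and the whitespace convention are assumptions recovered by eyeballing the output, not anything the producing tool guarantees.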
PowerShell could stand to expose its internal objects more, to bridge this gap. That would require something more complex than the linear terminals we're used to: something that would give the user IntelliSense or point-and-click quality. At the very least, it should expose API documentation (i.e. descriptions, not just function signatures) in Get-Member[2], but preferably it should have IntelliSense popups telling you what type to expect from a command (or what you just got), and what you can do with it. PowerShell ISE is 20% there. My dream would be something McCLIM-like: I could point at any piece of human-readable output and jump straight to the property it represents.
--
[0] - Yes, I know there's a simpler variant for this case, but I'm showing the general pattern for an arbitrary object collection.
[1] - Again, I know there's `killall`.
[2] - Hell, they should start shipping the documentation with default installation. As it turns out, Get-Help can't help you much until you let it download the documentation package on first use. Which was super-annoying when I had to do some PS work on a VM with no direct Internet access.
Yes, I think it boils down to having to know MS's mental model of objects before being able to leverage it, while Unix and strings have no model: you're free to extract structure and data as you see fit (shooting your foot or not). Text is always kind of unbounded in possibilities, but it requires you to do a bit more work on your own.
No, but someone else does. Every language comes with its own collection[0] of half-baked, bug-ridden libraries for parsing JSON and CSV. JSON and CSV are particularly poor data formats, in that JSON has an impedance mismatch if the language you're using isn't JavaScript[1], and CSV means whatever the tool generating it thinks it means - it doesn't have to be comma, separated, or just values.
Also, if you adopt either, you lose the benefit of human readability, which is half of the justification for UNIX tooling emitting plaintext in the first place. So one might as well bite the bullet, treat machine readability and human readability as separate concerns, and use formats suited to each individually.
--
[0] - It's never just one library.
[1] - Do JSON objects get mapped to structs, arrays or hash tables in your language? How do you distinguish between true, false and null?
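The "half-baked parser" failure mode is easy to reproduce even in JavaScript; a sketch using the split-on-comma "CSV parser" that nearly every codebase grows at some point (the sample row is made up):

```javascript
// A quoted field containing a comma is legal CSV (RFC 4180 allows it),
// but the ubiquitous split-on-comma "parser" shreds it into two fields.
const row = '"Doe, Jane",42,true';

const naive = row.split(',');
console.log(naive.length); // 4, not 3: the quoted field was split in half

// Typing is the other half of the problem: '42' and 'true' arrive as
// strings until some layer guesses otherwise, and CSV has no standard
// way to express null at all.
```

JSON fixes the typing problem for JavaScript specifically, which is the impedance-mismatch point of footnote [1]: other languages still have to decide what an object, an array, or null maps to.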