The new "Threads" app by Meta (Facebook) is just the old "4-E" strategy (Embrace, Extend, Extinguish, Exploit) aimed at destroying Mastodon:
1. Embrace (this is what they are doing now): launch a competing service that is compatible with Mastodon. The vast majority of users, most of whom don't care about the privacy and intimacy of the Mastodon network, will go with the brand with the most name recognition. The number of users already signed up for Threads shows this to be true.
2. Extend: make their service appear to be better with features like search, which they have the resources to build but the rest of the Mastodon network does not. Also include features for tracking and advertising, and sell this as a good thing: "a better place to grow your personal brand, your business." When people think about joining either Facebook Threads or some other Mastodon instance, which will they choose? "Oh, Threads users can also talk with Mastodon users, so they are basically the same? Well, why not just use Threads then?" The one with the most name recognition will always win.
Then come the blog posts and YouTube videos: "I tried Threads, Bluesky, Mastodon, and Pixelfed, each for 1 month, here is what I learned," in which the author decides Threads or Bluesky is best because they have better features and you don't have to decide which instance to join.
3. Extinguish: after attracting a critical mass of users large enough to decimate the user base of the competing Mastodon network, and having temporarily appeared to have better features like search, quietly remove compatibility with the Mastodon network.
This might affect only 10% of Mastodon users, because the other 90% will be on Threads. Then people will think, "Who cares if we lose contact with that tiny minority of old Mastodon users? They should have joined Threads by now anyway, and they still can. It has search, and more people voted for it with their patronage. And you don't have to think about what instance to join, it's easier!"
At this point, people begin to wonder what the point of Mastodon even is.
4. Exploit: without any real competition to keep people from leaving for an alternative, start exploiting users for more and more content for ad revenue, while also squeezing advertisers with ever-increasing ad prices, and cutting costs on the quality of the service until it becomes unusable. But at this point it is too late for Mastodon: the momentum it once had is long gone, and it is no longer a threat to the Meta corporation. Their investment paid off.
Meta is one of the world's largest corporations, and it has made most of its money not just through advertising but through gathering and selling people's personal information. They are scared to death of losing the control over the Internet that they have gained over the past 15 years or so, and they are fighting to take that control back for themselves.
We built this, but now a corporation like Meta/Facebook feels it has the right to exploit it for all its riches until it is destroyed. Don't let that happen. Join the Fediblock cause; it is the only way to protect our home-grown community from corporate takeover.
This post began as an answer to the question "what type of problems do you solve using Lisp?". The answer got to be a bit too long, so I am re-publishing it here as a proper blog post. I am also including some of a post I wrote on Mastodon which touched on some of these same issues.
So to answer the question: I have known about Common Lisp and Scheme for years, but only recently started using them. This is the story of the 3 Lisp dialects that I use.
I use Emacs and Emacs Lisp to manage my tens of thousands of text files. I write Emacs Lisp scripts to automate simple tasks like searching for pieces of information, formatting them, and outputting them to a report that I might publish on my blog or send in an e-mail. I also use Emacs to help with data cleaning before running machine learning processes: Emacs helps with navigating CSV and JSON files, and it is also a really good batch file renamer.
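As a small sketch of the kind of script I mean, here is an Emacs Lisp function for the batch-renaming use case; the directory layout and date-prefix naming scheme are hypothetical examples of my own, not anything the reader's setup requires:

```elisp
;; Sketch (hypothetical naming scheme): prefix every .txt file in DIR
;; with today's date, skipping files that already start with a
;; four-digit year.
(defun my/add-date-prefix (dir)
  "Rename each .txt file in DIR to add a YYYY-MM-DD- prefix."
  (dolist (file (directory-files dir t "\\.txt\\'"))
    (let ((name (file-name-nondirectory file)))
      (unless (string-match-p "\\`[0-9]\\{4\\}-" name)
        (rename-file
         file
         (expand-file-name
          (concat (format-time-string "%Y-%m-%d-") name)
          dir))))))
```

Run it interactively with `M-: (my/add-date-prefix "~/notes/")` or from a batch script; the same `dolist`-over-`directory-files` pattern covers most of my file-management chores.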
I have recently started using Guile Scheme to do some personal projects. I went with Guile over the myriad other Scheme dialects because it is the implementation used for the Guix package manager and operating system.
Also, there is Goblins, a distributed object-capability programming system that is officially supported on the Guile platform, and I have been really wanting to write applications in this programming style ever since I first learned about it.
Also, there is G-Golf, a foreign function interface layer that allows Guile to automatically use any C library that implements the GObject Introspection interface. So through Guile, as with Python, you can use the C libraries used to create native apps in the Gnome, MATE, Cinnamon, or (my personal favorite) Xfce desktop environments. This potentially makes Guile a viable alternative to Python scripting across all of those Linux desktop environments.
Of all the Lisp dialects, Scheme is my favorite, for a few reasons:
It is absolutely tiny. Guile is relatively large (though not as big as Common Lisp), but other implementations are unbelievably small. For example, the Chez Scheme "petite" interpreter is fully compliant with the R5RS standard, and its executable is about 308 kilobytes on a 64-bit Linux system.
Hygienic macros with syntax-case.
Recursive functions instead of the loop macro of Common Lisp. When writing algorithms, I personally find it easier to reason about recursive functions than loops. Scheme also provides the peace of mind that comes with knowing the optimizing Scheme compiler will ensure tail-recursive loops never overflow the stack.
Pattern matching is well supported by most Scheme implementations.
It is a "Lisp-1" system, meaning there is only one namespace for variables and functions, as opposed to Common Lisp (a "Lisp-2" system), which allows a name to denote a variable, a function, or both. I personally find it easier to reason about higher-order functions in Lisp-1 systems.
Support for delimited continuations, a fairly recent development in programming language theory (first discussed in the research literature around the late 1980s and 1990s) that is available in several Scheme implementations.
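The point above about recursion can be illustrated with a minimal sketch: a loop written as a tail-recursive named let. Because the recursive call is in tail position, a compliant Scheme implementation runs it in constant stack space, no matter how large n gets.

```scheme
;; Sum the integers 1 through n with a tail-recursive named let.
;; The call to `loop` is in tail position, so a standards-compliant
;; Scheme implementation will not grow the stack, even for huge n.
(define (sum-to n)
  (let loop ((i 1) (acc 0))
    (if (> i n)
        acc
        (loop (+ i 1) (+ acc i)))))

(display (sum-to 1000000))  ; prints 500000500000
(newline)
```

The equivalent Common Lisp code would typically reach for the loop macro, or hope the implementation happens to optimize tail calls; Scheme's standard guarantees it.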
That said, I am also starting to experiment with Embeddable Common Lisp (ECL), because it is a lightweight, standards-compliant Common Lisp implementation that compiles your program into C code, and this is useful for my professional work.
The modern software industry, especially in the realm of big data and machine learning, has mostly settled on a pattern of using C++ for creating performance-critical libraries, and creating Python bindings to those C++ libraries for scripting. I was hoping languages like Haskell and/or Rust might come along and change all this, but it will take decades (if ever) for the software industry to turn in that direction.
The problem with Python, in my experience (and I believe many other software engineers would agree), is that it does not scale well to larger applications, whereas Common Lisp does. This is for various reasons, but mostly due to Lisp's strong dynamic typing and the CLOS implementation of the meta-object protocol. Yet too many companies waste time writing large applications in Python — applications that are much larger than the scripting use cases Python was originally intended for. I believe this time and money would be better spent on other things.
So I see Common Lisp, and the ECL compiler, as a potentially viable alternative to the sub-optimal status quo of Python as a scripting layer around C++ code libraries, at least perhaps for my day job, if not more generally industry-wide. Mostly, ECL would allow me to write a program in Common Lisp instead of Python, but deliver to my clients the C code that ECL generates, to be used in their machine learning projects. (I have not actually done this yet; I am still investigating whether this would be a viable solution to any of my projects.)
ECL makes it easier to use C and C++ libraries through Lisp instead of Python. And there are so many good C++ libraries out there: Qt, OpenCV, TensorFlow, PyTorch, OpenSceneGraph, FreeCAD, the Godot game engine, Blender. And ECL compiles easily on Linux/Unix (GCC), Windows (MSVC), and macOS (Clang), so it is good for cross-platform development.
So in spite of Lisp being such an old family of languages (its earliest incarnations dating all the way back to 1958), and in spite of being surpassed in popularity and widespread use by languages like Python and JavaScript across the software industry, Lisp is still a modern, relevant, evolving, and very useful family of programming languages. What is more, a Lisp such as Scheme or Common Lisp would be a better choice of programming language in many applications where Python is currently used.
I just hope I eventually find the time to try out all of these Common Lisp and Scheme related ideas I have. I especially hope ECL turns out to be a profitable technological choice for the professional work that I do. But only time will tell.
You can switch your Linux console between virtual terminals with the "Ctrl-Alt-F[1-9]" keys, and this article shows how to use a spare virtual terminal to run a full graphical desktop session from a remote host over SSH.
The advantage of this approach is that you do not need to install remote desktop services on all of your servers. Most Linux and Unix systems have SSH with X11 forwarding pre-installed, and most have Xorg pre-installed or easily installable from their default software repositories. It works, is secure, and uses software that is likely already installed.
I assume you have installed a graphical desktop environment (DE) such as Gnome, KDE, or Xfce on the remote host (e.g. a server). In this article, I will use Xfce and the "xfce4-session" command to launch the DE on the remote host.
I assume you can log in to the remote host with the "ssh -X" command and run GUI apps like Firefox on the remote host while having their windows displayed on your local host (e.g. your laptop).
I assume your local host (laptop) has an X11 server such as Xorg installed, that it is set up to switch between virtual terminals (VTs) using "Ctrl-Alt-F1" through "Ctrl-Alt-F7", and that you might already be running a DE on one of these virtual terminals (usually the 7th one).
I assume you have already set up your SSH public keys for password-less remote login on the remote host (server) using "ssh-copy-id", and that on the local host (laptop) you can run an SSH key agent like "ssh-agent" and load it with your SSH private keys using "ssh-add" on a VT.
I assume you have already enabled the SSH option "ForwardX11 yes" in the SSH configuration for your remote login, and that you can run a command like "ssh -X remote-host firefox" and have the remote host (server) run Firefox but display its window on your local host (laptop).
On your local host (laptop), switch to any virtual terminal (VT) not currently being used by an X11 or Wayland server; for example, you may switch to "/dev/tty1" by pressing "Ctrl-Alt-F1". Now log in with your usual user name and password.
On this VT, prepare your shell for password-less remote login. Run "eval $(ssh-agent)" to set up the SSH key agent for this login, and "ssh-add" to load your private keys (you will be asked for your key passphrase once).
On this VT, launch an Xorg server with "xinit", but tell it to run the DE command, in this case "xfce4-session", on the remote host (server) using SSH with X11 forwarding enabled:

xinit /usr/bin/ssh -vX remote-host xfce4-session -- :15 vt1
The symbol ":15" is an arbitrary choice by me; it tells the new Xorg server instance to identify itself as ":15". Any integer will do, but it is probably best to avoid 0 through 7.
The symbol "vt1" tells the new Xorg server instance to commandeer the virtual terminal attached to "/dev/tty1", which is the VT we are currently using. Make sure you specify the correct virtual terminal.
Of course, "remote-host" should be replaced with the address of the remote host (server) into which you want to log in, and the "xfce4-session" command should be replaced with the command that launches whatever DE is available on the remote host.
Be aware that the xinit command looks for a full path like "/usr/bin/ssh" instead of just "ssh" to differentiate between the client command and its arguments.
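Putting the steps above together, the whole session on the local laptop looks like this; the host name "remote-host", the display number ":15", and "vt1" are example values from this article that you should adjust to your own setup:

```shell
# At a spare virtual terminal (e.g. Ctrl-Alt-F1), after logging in:

# 1. Start an SSH agent for this login shell and load your private
#    keys (ssh-add prompts once for your key passphrase).
eval $(ssh-agent)
ssh-add

# 2. Start a local Xorg server on display :15, bound to /dev/tty1,
#    whose only client is an SSH session running the remote DE.
xinit /usr/bin/ssh -vX remote-host xfce4-session -- :15 vt1
```

When the SSH session ends (for example, when you log out of the remote desktop), the local Xorg server exits and the VT returns to a text console.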
Streaming audio on the remote host over to the local host is beyond the scope of this article, but can be accomplished with audio servers like PulseAudio or PipeWire.
Artificial intelligence, more technically called machine learning, and even more technically, artificial neural networks, are becoming more and more amazing in their capabilities. They first started doing some truly disturbing things a few years ago with "deep fakes," creating absolutely real-looking images of people who never existed by sort-of averaging the images of millions of other people. They can even create animated images, swapping your face with that of a celebrity or politician, allowing you to make a video of yourself doing or saying things as that celebrity or politician that they never actually did or said. Then came the deep fakes for people's voices: with enough samples of a person's voice, you can translate your own voice into the voice of a celebrity or politician.
People immediately saw the potential for abuse, but their fears were placated when it was shown that you could train other neural networks to detect deep fakes. Of course, I think what ordinary people don't yet grasp is that Generative Adversarial Networks (GANs) are created by putting two neural networks in competition with each other and letting them learn how to best each other. So it is possible to train an AI that learns how to beat the deep fake detectors, and then you have to re-train the deep fake detectors to beat the deep fake AIs. However it seems to me that for the time being, people are not quite as concerned about deep fakes anymore.
But then the GPT algorithm was developed. The various GPT networks, suffixed by a number (GPT-2, GPT-3, GPT-4), each successive network bigger and more capable than the previous, have got everyone's attention now. The reason is that these neural networks actually seem to understand things. You can ask one questions, and it gives you a reasonable answer. It passes the Turing Test with flying colors (an informal, unscientific test that simply asks, "can this machine trick people into thinking they are talking with a human?"). It can probably play and win the game of Jeopardy, or at least come close to what the IBM Watson supercomputer achieved 10 years ago, with a fraction of the computing power (and electricity) that Watson required. It can probably correctly diagnose some diseases by reading a person's description of their symptoms. It can pass the Bar Exam, and probably offer sort-of reasonable legal advice for very simple cases. It can write simple computer programs for you, with you just explaining what you want your program to do. (See this video demo.)
Machines like this do not think like us; they merely mimic our speaking behavior. They clearly have no sense of "truth" or "falsehood". If one tells you something blatantly false, it isn't even lying to you; it is just saying something (anything) in a way that mimics what an expert would actually say in response to your question.
One of the positive uses that I think these new language transformers like ChatGPT could be good for is to make it much easier to find specific knowledge on an Internet that serves us a deluge of content that is difficult to sift through for the things we actually want or need to see.
Recently I was curious if I could find a technological solution to a problem that I was fairly certain probably already existed. Before asking anyone I knew, or searching the web, it occurred to me to maybe see if I could use ChatGPT to find a solution to my problem.
It failed, miserably. And since reading through GPT chat logs is among the most monotonous things I ever have the displeasure of reading nowadays, let me summarize and paraphrase the entire conversation I had with this machine. The actual chat log is posted below, in case anyone actually likes reading the output of ChatGPT, or in case anyone doesn't believe that what I am describing actually happened.
Stable Diffusion is the algorithm behind apps like Dall-E, which can interpret a sentence in the English language (or any human language) and generate a mostly realistic-looking image of what you described. Some of the most interesting images generated by these AIs use "style transfer," where you ask it to create an image from (let's say) James Cameron's Terminator film in the style of the early 20th-century director Fritz Lang, and you'll see what the T-800 looks like as a steampunk robot fighting a Kyle Reese who looks like a German star of the 1920s film Metropolis.
Before long, it will likely be possible to ask a streaming service like Netflix to generate any TV show you desire, on demand, making you the showrunner. All of it, from the storyline to the setting, the world the characters inhabit, the dialog, the music, and the cinematography, will be generated by an artificial neural network. It is just a matter of time before there is enough computing power to make this possible.
Even without any insider knowledge, I guarantee that, as I write these words, pornographic websites like RedTube, PornHub, and XVideos are dumping their profits into training neural networks to generate video of any pornographic scenario one could imagine. An effective neural network that could do this would be worth more than a mountain of gold.
I also guarantee that the real-life people featured in the videos used to train these neural networks will never see a single penny of those profits. They are entitled to be paid for the work they do, but they will not be, and there is nothing in the law that would defend their rights against this sort of abuse of the images of their bodies, since the end result is not an image of any one person's body, but the image of their body statistically averaged (so to speak) with the images of thousands of other people.
Sorry to end on such a solemn note, but I want to emphasize that now is the time we really need to begin discussing policy that defends the rights of humans against tech companies using their image or personal data to train artificial intelligence and then selling that data back to people at unfathomable profit margins. It is already happening today: Google is recording your Internet browsing history and using it to train the neural networks that decide which ads to show you. Unless there is policy to stop them from abusing your data in yet-unimaginable ways, they will abuse your data.
Cryonics is pseudoscience. That doesn't stop people from trying.
I remember once I put a banana in the freezer. I forgot about it for an hour, and when I took it out, it was all black. Every single cell in that banana that had retained water had burst as the water froze and expanded. The entire volume of the fruit was completely bruised. Apparently people who believe they can freeze their bodies and be re-animated in the future have never had this little life experience of forgetting a banana in the freezer. Or maybe they think the rate at which you can freeze and then thaw a body can prevent catastrophic damage at the cellular level to the brain and all the rest of the body (it can't).
The lesson to be learned here: a sufficiently unscrupulous entrepreneur can probably make a fortune off of people's desire to control the exact timing of when they shuffle off this mortal coil.
So I propose an idea for a web app (or maybe a product proposal to anyone at Meta/Facebook or Alphabet/Google willing to hire me as a consultant): we can make a billion overnight by letting people upload their brains to a neural network like ChatGPT. Just have them collect everything they have written in their entire lives, especially their e-mail correspondence, their Facebook posts, their Twitter posts, any blog posts they have written, scans of their hand-written diaries, everything they can find and digitize, and feed it into a neural network. The neural network then, you could argue, would contain a large portion of everything a person has ever experienced, learned, or thought about, and it could serve as their avatar in their absence after they have passed away.
If these massive tech corporations aren't going to take that idea seriously, maybe they can hire me as a PR guru. I would try to make the idea of stealing people's private data — the entire history of everything they have ever written online — less horrifying, maybe even more desirable. I would do this by describing these online services that sell targeted ads as a means of letting people live forever as a chat bot in the digital realm. If Facebook and Google are not creating ad campaigns for this right now, they ought to be.
You could even make the neural network the executor of your estate, entrusting it to carry out your will, making all the same decisions you would make if you were alive. You could guarantee, through legal action carried out by this artificial executor, that if any of your kids or grandkids ever marry outside of their race, marry the same sex, or have gender reassignment surgery, they will suffer appropriately. It is the conservative bully's dream come true! It certainly seems to me that some of the people who care so much about extending their lives to such unnatural extents are probably just as unnaturally preoccupied with forcing their will upon their unfortunate family members in their absence. And with the current Supreme Court of the US, it would be easy enough to create law that allows such AIs the legal authority to act as the executor of the estate of the deceased.
Note: I hope it is clear that I was being ironic in that last bit. I wish no suffering on anyone regardless of race, gender, or marital status.
There is a well-known article, Software Disenchantment, on the Internet today about how disenchanted its author has become with modern software engineering, and I have had similar thoughts after a similar number of years as a software engineer. It is nice to know I am not the only one among my peers who feels this way. When I was a kid growing up with 8-bit, 16-bit, and then 32-bit computers, the future seemed bright. Some technologies I expected (like video phones and photorealistic 3D graphics) would happen; some technologies I could never have imagined in my wildest dreams, for better (massive, world-wide collaborative projects like Wikipedia) or for worse (deep-fake images of people's faces and voices).
But working as a software engineer is soul-crushing work nowadays because every project I work on is so superfluous while also doing nothing to help solve the world's most pressing problems. It is mostly just trying to squeeze out a bit more performance from applications that should probably not even exist to begin with, especially for companies selling a "service" to customers as a pretext for exploiting data about people's private lives to sell ads.
But then, there is lots to love about modern computers as well. So it got me thinking: what if we could get rid of the bad and keep the good of modern computers? What would I keep, what would I banish?
The UNIX Philosophy says that every program does one thing, and does it well. Does Emacs, with its multiple built-in e-mail clients, IRC client, web browser, terminal emulators, software development tools, and games (like Tetris), even fit this definition? It depends on how you define Emacs, that is, whether you think of it as a text editor with too many extensions, or whether you think of Emacs as a general-purpose Lisp programming language with hundreds of useful libraries and applications.
Although this introductory article alone might be more than enough exposition to convince you of my thesis, here is a summary of the entire series of articles:
Emacs does not have extensions, it has apps

I argue that Emacs is an app platform, not a text editor. Many of us have heard the old joke that Emacs is an operating system that lacks a good editor.
I have heard many people complain that Emacs has extensions that allow it to do everything, including web browsing and e-mail, and that this violates the UNIX philosophy of every program does one thing well, because Emacs is one program that tries to do absolutely everything. This is a misconception. In actual fact, Emacs is a programming language and runtime. So it does do just one thing well: it runs interactive Lisp programs. And Emacs extensions are actually applications written in Lisp.
UNIX Philosophy
So suppose we agree that Emacs is a Lisp app platform. Maybe this is just a technicality. Does Emacs really fulfill the spirit of the UNIX Philosophy? When people actually use Emacs, are they practicing the ethos of UNIX?
In this article, I try to settle on a clearer definition of The UNIX Philosophy. As a starting point, the Wikipedia article on this topic has a pretty comprehensive listing of the various experts who have all spoken on the UNIX Philosophy throughout history: Brian Kernighan, Doug McIlroy, Robert Pike, Peter Salus, Eric Raymond, and others. Actually reading the work of these experts, it is hard to find a single comprehensive definition, but they all seem to agree on the following points:
Programs are minimalist tools/components used to construct solutions to problems. Each program should do one thing, and do it well.
Programs are composable transformations over data, and the output of a program should be in a human readable and machine readable textual format.
Programs should help users come up with their own ways to automate tasks, and should make it easy to experiment with task automation and software development in an interactive environment. Implied is the use of a Read-Eval-Print Loop (REPL) where you can quickly experiment with running a program and seeing results immediately.
This is really the crux of my argument: the UNIX Philosophy is really just a misguided, or perhaps incomplete, formulation of the principles of functional programming (FP). If you consider that a UNIX program is conceptually equivalent to an FP function, the idea that every program does one thing, and does it well, is another way of expressing the principle that FP functions should be pure, orthogonal, general, and elegant. Most of the principles of the UNIX Philosophy, at least by my above definition of it, are either in agreement with FP or are a natural consequence of FP.
In day-to-day usage, the Bourne family of programming languages, usually Bash, is used to compose programs together into pipelines, such as with the pipe operator. Though Bash does allow for some functional programming techniques, it is not a proper FP language; it provides only a few of the features of FP. To the extent that the UNIX Philosophy is really about functional programming, Lisp environments like Emacs fulfill the promise of the UNIX philosophy perhaps even better than UNIX does. An interactive Lisp programming environment such as Emacs provides a proper FP language for general, day-to-day use on UNIX systems.
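To make the pipeline idea concrete, here is a classic word-frequency pipeline of the kind this section has in mind; each program performs one small transformation and passes plain text to the next, much like composed functions:

```shell
# Count word frequencies: sort groups identical lines together,
# uniq -c counts each group, and sort -rn puts the most frequent
# line first.
printf 'apple\nbanana\napple\ncherry\napple\n' | sort | uniq -c | sort -rn
```

An FP language expresses the same composition with higher-order functions applied to lists, rather than with processes connected by byte streams.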
There was not much, if any, collaboration between the UNIX people and the Lisp people. Unlike Lisp, UNIX was developed without the mathematics of Lambda Calculus as a guide, so I would argue that the UNIX Bourne shell scripting language is really just a naive implementation of a FP language.
I present an overview of the history of Lisp and the history of UNIX. Though both Lisp and UNIX have roots in the MIT CSAIL laboratory, I conclude that both were developed independently; Lisp preceded UNIX, UNIX did not borrow any ideas from Lisp.
Both Lisp and UNIX emerged with the invention of time sharing systems. The technological advancements of time sharing systems, and the command line interface (CLI) were both novel technologies at this time, and efficient use of the CLI is what underlies both the UNIX Philosophy, and the early Lisp programming techniques that were developed around the Read-Eval-Print Loop (REPL). Emacs was among the first Lisp implementations on UNIX.
Lisp was the first functional programming (FP) language. It was developed at MIT in 1958 on the IBM 704 mainframe computer, was based on the Lambda Calculus of mathematician Alonzo Church, and so it had a more rigorous mathematical foundation than UNIX.
There are criticisms that I commonly hear when I discuss with peers why Emacs actually fulfills the tenets of the UNIX Philosophy. I try to address those criticisms in this article.
(continued...)
Emacs does not have extensions, it has apps
There is one argument that I have got very tired of hearing, which goes something like this:
If the UNIX Philosophy is that every program does one thing, and does it well, then Emacs is the opposite of that. It is a text editing program that tries to do everything. It has extensions for being an e-mail client and web client, for playing music, for managing Git branches, even for playing Tetris. Who needs a text editor with that much bloat? Emacs may as well be an operating system, not an editor.
There are two problems with this argument:
Emacs is not a text editor, it is an interactive Lisp runtime environment, i.e. it is a programming language. Emacs really does do just one thing, and does it well: it runs Lisp programs.
Most of those extensions, except maybe Tetris, are not bloat; they are Lisp packages, or apps if you will, that run in the Emacs runtime. (Well, some people actually kind of need Tetris.) Most other programming languages you install come with a selection of useful packages — maybe even Tetris, if the language is designed for programming games. And like Python's PIP, Perl's CPAN, Ruby's Gem, Node's NPM, Rust's Cargo, Haskell's Cabal, and Common Lisp's QuickLisp, Emacs has its own package manager and package repository as well, called ELPA.
(continued...)
In part 1 of this series I dispel the myth that Emacs is a bloated text editor that does everything. I declare that Emacs is really a Lisp interpreter, and therefore it does just one thing and does it well: it runs Lisp programs. In this way, Emacs does fulfill the UNIX Philosophy.
But is this really a satisfactory argument? It might sound like Emacs technically follows the UNIX Philosophy, but does not really follow the spirit of the philosophy. So in the rest of this series, I want to really examine how the UNIX Philosophy is defined, what some of the key principles of UNIX are, and whether using Emacs allows you to really follow the principles of the UNIX Philosophy.
Several notable software engineers, most of whom actually worked on the UNIX project at Bell Labs, have spoken on the subject:
Doug McIlroy in the foreword to the original manual for the UNIX Time-Sharing System in 1978.
Brian Kernighan and Rob Pike's indispensable textbook, The UNIX Programming Environment, in 1984.
Peter H. Salus who wrote A Quarter Century of UNIX in 1994.
and Eric S. Raymond's textbook The Art of UNIX Programming in 2003.
(continued...)
In part 2 of this series, I tried to define more precisely what the UNIX Philosophy really is, and I quoted many of the original authors of UNIX and the UNIX Philosophy.
In this article, we get to the crux of my argument: that the UNIX Philosophy is really all about functional programming (FP), or really, that the UNIX Philosophy is an incomplete or misguided formulation of the principles of FP. If you really want to follow the principle of using programs as tools that do one thing and do it well, using an FP environment like Emacs, which provides a convenient interface for executing Lisp functions, is generally the better solution.
Doug McIlroy, at least in my book, deserves the credit for pipes. He thought like a mathematician and I think he had this connection right from the start. I think of the Unix command line as a prototypical functional language.
— Alfred Aho (co-author of the AWK language), stated in an interview, published in Masterminds of Programming: Conversations with the Creators of Major Programming Languages, 2009.
Since the UNIX Philosophy is so concerned with the design and use
of programs, let's consider what a program really
is. When people talk about programs in the context of the UNIX
Philosophy, they usually mean some fundamental unit of executable
code in the operating system that can perform some
transformation on data, and that can be composed into shell
pipelines. There is also an emphasis on making the tools easy to use
in an interactive programming environment, so as to best take
advantage of the interactive nature of the command line.
A program is a function
In a UNIX Programming Environment, the fundamental unit of code
that is a program is conceptually equivalent to a function
in an FP language. The definition of a program
need not be restricted to code that runs in
its own process, and that would be a somewhat meaningless
constraint to apply. The most important property of these units of
code — these functions — is that they be easy to reason
about (do just one thing), be somehow composable, and be
easy to use in an interactive environment, such as in a
REPL.
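As a minimal sketch of this equivalence (the helper names to_lower, strip_punct, and words are my own invention, not standard tools): each shell function below reads stdin and writes stdout, so it behaves like a pure function on a text stream, and the pipe plays the role of function composition.

```shell
# Three hypothetical one-job "functions" over text streams.
to_lower()    { tr '[:upper:]' '[:lower:]'; }  # normalize case
strip_punct() { tr -d '[:punct:]'; }           # drop punctuation
words()       { tr -s '[:space:]' '\n'; }      # one word per line

# Pipe composition, equivalent to the nested call
# words(strip_punct(to_lower("Hello, World!"))):
printf 'Hello, World!\n' | to_lower | strip_punct | words
# prints:
#   hello
#   world
```

Because every "function" has the same interface (text in, text out), any of them can be swapped, reordered, or tried interactively at the prompt, which is exactly the REPL-style workflow described above.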
(continued...)
In part 3 of this series I talked about how the UNIX Philosophy,
the idea that every program does one thing, and does it well,
has several defining principles in common with functional programming
(FP). In particular, functions in FP should also do one thing and
do it well, so programs and functions are
conceptually the same. But to do FP, you must be able to compose
functions together using higher-order functions.
I will take for granted the fact that Lisp is commonly
acknowledged to be an FP language, so I will not explain why Lisp is an
FP language. This article is more about how the UNIX shell programming
languages, the Bourne family of programming languages, are not FP
languages: there are only a few built-in language constructs for
composing programs, like the pipe operator
(|). This proves to be quite the limitation when
it comes to composing programs. When composing programs or functions
together to solve larger problems, a Lisp programming language has
an advantage over a Bourne-style programming language. Refer to
the Eshell
programming language, which combines a Bourne shell-like language
with Emacs Lisp, as just one example of how Lisp can improve on the
UNIX Programming Environment.
My goal here is not to evangelize Emacs. I readily admit Emacs is
far from perfect, and that there are certainly better
implementations of FP (Common Lisp, Scheme, and I especially love
Haskell). Rather, my argument is that there are better ways of
designing programs/functions so that they do one thing and
do it well, without designing your programs/functions for use in
a UNIX shell. It just so happens that Emacs is one of the oldest
Lisp implementations still in widespread use that was originally
implemented for the UNIX OS and its clones. Decades of evolution
have made Emacs one of the most practical means of integrating
Lisp-style FP within the UNIX Programming Environment.
I also think I should mention that when Emacs is used as a shell in place of Bash, you have access to a wider range of FP tools. You still have access to all of the shell programs on your computer, but you also have access to Lisp functions that compose in other ways. You can, for example, capture the output of a command in a text buffer, then start applying both Lisp functions and UNIX shell programs as filters to the text buffer, seeing the result of each transformation as it happens. This allows you to experiment with function composition interactively, but without necessarily being restricted to the use of the REPL.
(continued...)
In part 4 of this series I talked about how the Bourne shell language falls short of being an FP language. In this article I consider the question of whether UNIX may have been inspired by Lisp, as both were developed in part at the MIT CSAIL laboratory. But as we shall see, the UNIX engineers had left CSAIL just before John McCarthy, inventor of Lisp, would set up shop there. I conclude that UNIX takes no inspiration from Lisp, but that UNIX and Lisp share some features in common, especially REPL-based software development techniques in a Command Line Interface (CLI), because both were invented around the time when the CLI was invented.
The histories of Lisp and UNIX both originate around the time that computer companies like IBM and DEC started to develop time-sharing systems in the early 1960s, because it was this innovation that naturally led to interactive command line interfaces (CLIs) and the Read-Eval-Print Loop (REPL). It was around this time that people would first begin philosophizing about the best ways to use computers in interactive CLIs.
Several computing platforms would eventually build interactive CLIs, and the idea would remain widespread well into the personal computing era of the 1970s and 1980s, until CLIs went out of vogue with the rise of GUIs. Interestingly, some of the very first CLI systems, Lisp and UNIX, remain useful even half a century later, surviving beyond most other CLI systems of the era, which have now disappeared. This may be because of the separately enduring philosophies that guided the design of Lisp and of UNIX.
(continued...)
I still find that people seem resistant to the idea that Emacs is
actually a good example of the UNIX Philosophy in practice.
I even have difficulty convincing people that the UNIX Philosophy is
in any way similar to the practical application of FP. Maybe
this is a result of the editor holy wars
phenomenon, and of the fact that I am a known Emacs
zealot. Over the years, I have heard various objections to my
thesis:
If you don't like Bash, UNIX lets you use whatever shell you want. — Sure, and you can use Emacs as a shell.
UNIX lets you run a program written in any language. — Emacs can execute OS processes just as well as a shell language can.
Without the UNIX Philosophy, there are no composable programs. — This is denying the antecedent, a logical fallacy.
So is Microsoft Windows UNIX now? — Whether MS Windows follows the UNIX Philosophy is not a counterargument to anything I have said.
(continued...)
What exactly does Hyperbole do? The simplest possible explanation is that Hyperbole is a personalized information management system customizable with the full power of Emacs. But this explanation is usually less than satisfying, possibly because Emacs already has a number of built-in major and minor modes that can help to manage your personal information, which raises the questions: how is Hyperbole different, and what useful features does it add? I'll attempt to answer those questions.
Hyperbole augments the already very powerful textual user interface (TUI) that is built-in to Emacs with the most useful aspects of a graphical user interface (GUI) in a very non-invasive way. The rest of this article explains the Hyperbole user experience, and recommends a simple but useful workflow that you can use to get started with it.
Let's start with a microscopic view, and then zoom out. The
fundamental mechanism of Hyperbole is the hyperlink, that
is, push buttons that perform actions. Of course, hyperlinks exist
everywhere in Emacs, but Hyperbole works a bit differently: it
provides a mechanism called implicit links
or implicit buttons, which are triggered with a
universal action key binding {M-RET}
that behaves like the left mouse click in a
GUI. So an implicit button is really any arbitrary text that is
recognized by a cascade of context-sensitive Hyperbole rules that
are executed on-demand when the universal action key binding is
pressed. Therefore, neither Emacs text
properties nor a markup language are necessary to create
actionable text with Hyperbole.
(continued...)
This article is not about shell-mode, the
Emacs wrapper around your OS's shell REPL, nor is it
about term-mode, the Emacs terminal emulator. It is
about how Emacs is a shell in and of itself, and how it can replace
a CLI shell.
With an ordinary CLI using Bash
Same thing, but using Emacs
What is a distro? It is an online service that distributes the Linux OS, and Linux apps, to you the end user. A distro has a default look, but do not pick a distro based on how it looks; pick one that provides the best software and the best service. Install your distro, then install any look-and-feel that strikes your fancy.
What distro should I choose? For beginners, just pick one of Fedora, Ubuntu, Pop!_OS, or Mint.
I'd like to learn more about Linux, what should I do?
I'm a Linux noob, how can I become a power user?
Using the command line looks like a useful skill to learn, where can I start?
Desktop Environment (DE) is. There I talk about what I consider to be the top 4 DEs in Linux, but I also list a 5th option: nothing, i.e. make up your own from stock parts. In this article, I go over what those stock parts are.