17

I sometimes run into software that is not offered as a .deb or .rpm package but only as an executable.
For example Visual Studio Code, WebStorm or Kerbal Space Program.

For this question, I will take Visual Studio Code as the point of reference.

The software is offered as a zipped package.
When unzipping, I'm left with a folder called VSCode-linux-x64 that contains an executable named Code.
I can double-click Code or invoke it from my terminal as /home/user/Downloads/VSCode-linux-x64/Code to execute it.
However, I would like to know if there is a proper way to install these applications.

What I want to achieve is:

  • one place where I can put all the applications/software that are offered in this manner (executables)
  • terminal support (meaning, for example, that I can type vscode from any folder in my terminal and it will execute Visual Studio Code)

Additional info:

  • Desktop Environment: Gnome3
  • OS: Debian

EDIT:
I decided to give @kba the answer because his approach works better with my backup solution and, besides that, having a script execute the binaries gives you the possibility to add arguments.
But to be fair, @John WH Smith's approach is just as good as @kba's.

Harrys Kavan
  • 1,431

6 Answers

18

To call a program by its name, shells search the directories in the $PATH environment variable. In Debian, the default $PATH for your user should include /home/YOUR-USER-NAME/bin (i.e. ~/bin).

First make sure the directory ~/bin exists or create it if it does not:

mkdir -p ~/bin
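If the command is still not found afterwards, ~/bin may not actually be on your $PATH yet (Debian's default ~/.profile only adds it when the directory exists at login). A quick sketch for Bourne-style shells to check and, if needed, prepend it for the current session:

```shell
# Sketch: ensure ~/bin exists and is on $PATH for this session.
# (Log out and back in for Debian's ~/.profile to add it permanently.)
mkdir -p "$HOME/bin"
case ":$PATH:" in
  *":$HOME/bin:"*) ;;                        # already on $PATH, nothing to do
  *) PATH="$HOME/bin:$PATH"; export PATH ;;  # prepend for this session only
esac
```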

You can symlink binaries to that directory to make it available to the shell:

mkdir -p ~/bin
ln -s /home/user/Downloads/VSCode-linux-x64/Code ~/bin/vscode

That will allow you to run vscode on the command line or from a command launcher.
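For the launcher side on GNOME, you can additionally drop a .desktop entry into ~/.local/share/applications. The Exec path below assumes the ~/bin/vscode symlink created above; the Name and Categories values are illustrative:

```shell
# Hypothetical GNOME launcher entry; adjust Exec (and add Icon=) to your paths.
mkdir -p ~/.local/share/applications
cat > ~/.local/share/applications/vscode.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=Visual Studio Code
Exec=/home/user/bin/vscode
Terminal=false
Categories=Development;TextEditor;
EOF
```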

Note: You can also copy binaries to the $PATH directories but that can cause problems if they depend on relative paths.

In general, though, it's always preferable to properly install software using the means provided by the OS (apt-get, deb packages) or the build tools of a software project. This will ensure that dependent paths (like start scripts, man pages, configurations etc.) are set up correctly.

Update: Reflecting Thomas Dickey's comments and Faheem Mitha's answer, here is what I usually do for software that comes as a tarball with a top-level binary and expects to be run from there:

Put it in a sane location (in order of standards compliance: /opt, /usr/local, or a folder in your home directory, e.g. ~/build) and create an executable script wrapper in a $PATH location (e.g. /usr/local/bin or ~/bin) that changes to that location and executes the binary:

#!/bin/sh
cd "$HOME/build/directory"
exec ./top-level-binary "$@"

Since this emulates changing to that directory and executing the binary manually, it makes it easier to debug problems like non-existing relative paths.
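Putting the pieces together for the VS Code example (the ~/build/VSCode-linux-x64 path is an assumption; adjust it to wherever you unpacked the archive):

```shell
# Create a wrapper in ~/bin that cd's into the unpacked tree and runs the
# top-level binary; the directory name below is illustrative.
mkdir -p "$HOME/bin"
cat > "$HOME/bin/vscode" <<'EOF'
#!/bin/sh
cd "$HOME/build/VSCode-linux-x64"
exec ./Code "$@"
EOF
chmod +x "$HOME/bin/vscode"
```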

kba
  • 823
  • 4
  • 13
  • 1
    I like this approach. Personally I'd just throw an alias into the bash profile, though it'd get messy fast if you had a lot of programs you did this with. – WorBlux Feb 22 '16 at 17:36
  • 1
    Then it can only be used from the shell. At some point you may want to install a .desktop entry to start from a menu or you add configuration, discover command line flags etc. An alias is very inflexible. – kba Feb 22 '16 at 18:21
10

According to TLDP, /opt might be a good place for this kind of software. I've used it myself to store some printer-related tools, and the "dynamic" version of Skype (as kba said, "terminal support" can then be achieved by setting the PATH variable accordingly).

More generally, I tend to use /opt to "install" proprietary software packaged as an executable, but that's probably just me. Besides, I tend to simply avoid this kind of software, since I usually have no certainty as to what it's going to do once I run it.

Another reason why I chose /opt is because it is usually meant for third-party, independent code, which does not rely on any file outside of its /opt/'package' directory (and other opt directories such as /etc/opt).

Under no circumstances are other package files to exist outside the /opt, /var/opt, and /etc/opt hierarchies except for those package files that must reside in specific locations within the filesystem tree in order to function properly. [...] Generally, all data required to support a package on a system must be present within /opt/'package', including files intended to be copied into /etc/opt/'package' and /var/opt/'package' as well as reserved directories in /opt.

One advantage of releasing source code is that people get to configure the compilation process, providing custom library/headers paths based on their system's specifics. When a developer decides to release code as an executable, that advantage is lost. IMHO, at this point, the developer is no longer allowed to assume that his/her program's dependencies will be available (which is why everything should be packaged alongside the executable).

Any package to be installed here must locate its static files (ie. extra fonts, clipart, database files) in a separate /opt/'package' or /opt/'provider' directory tree (similar to the way in which Windows will install new software to its own directory tree C:\Windows\Program Files\"Program Name"), where 'package' is a name that describes the software package and 'provider' is the provider's LANANA registered name.

For more information, I would also suggest reading this other U&L question, which deals with the differences between /opt and /usr/local. I would personally avoid /usr/local in this case, especially if I'm not the one who built the program I'm installing.

John WH Smith
  • 15,880
6

It is entirely possible, and in fact quite easy, to create a distribution binary package from a binary zip archive or tarball, as in your example of Visual Studio Code.

Yes, Linux distribution binary packages like debs and rpms are customarily generated from source, but they don't have to be. And it is often (though not always) possible to arrange things so that the resulting distribution binary package installs things in the "right" places to conform to distribution policy.

In the case of a random proprietary tarball, if there were a way to properly install the software, e.g. an install target in a makefile, then that could be used with the distribution packaging machinery. Otherwise, this might involve "manually" mapping files to the "right" places, which could be a lot of work. While creating such a package might seem a weird thing to do, it would still have one of the major benefits of package management, namely clean installs and uninstalls. And of course such a package would never be accepted into any Linux distribution worth the name, but that's not your question.
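As a hedged sketch of the simplest variant: a binary tree can be staged under /opt inside a packaging directory and wrapped into a .deb with dpkg-deb. The package name, version, and install path below are my own choices for illustration, not anything official:

```shell
# Stage an unpacked binary tree as a minimal .deb (names/paths illustrative).
staging=pkg
mkdir -p "$staging/opt/vscode" "$staging/DEBIAN"
# cp -r VSCode-linux-x64/. "$staging/opt/vscode/"   # copy the unpacked tree in
cat > "$staging/DEBIAN/control" <<'EOF'
Package: vscode-local
Version: 1.0.0
Architecture: amd64
Maintainer: you <you@example.com>
Description: Locally repackaged Visual Studio Code
EOF
# Build the package if dpkg-deb is available on this system
if command -v dpkg-deb >/dev/null 2>&1; then
  dpkg-deb --build "$staging" vscode-local_1.0.0_amd64.deb
fi
```

Installing and removing it then goes through the normal dpkg machinery, which is exactly the clean-uninstall benefit described above.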

Faheem Mitha
  • 35,108
3

I have rarely seen software that is delivered just as a binary executable and nothing else, and I would frankly be a bit suspicious of it. If nothing else I would at least expect a README (with instructions for installing it) and a LICENSE to accompany it. That being said...

The usual place where locally installed binaries not managed by the distro's package manager are kept is /usr/local/bin. You can put it there, and since that directory is (or should be) already in your $PATH you can run the software by typing its name at the command line.

Usually the software should also have a manpage (undocumented software is bad, right?) which goes in /usr/local/man and might have some support files such as translations into other languages and plugins which might go in /usr/local/share or /usr/local/lib, and so on. For this reason, software that isn't delivered as a package such as .deb or .rpm usually comes with an installer that puts everything in the right places. When you're installing from source, that's usually make install.

Celada
  • 44,132
  • In the case of Visual Studio Code, it has 77 LICENSE files scattered through the directory tree. The top-level Code is just the starting point. It might display the license when running (the 64-bit executable does not run on the machine at hand), but someone should verify that sort of thing to provide a good answer addressing the OP's actual question. – Thomas Dickey Feb 20 '16 at 14:35
  • Thanks @ThomasDickey for the clarification. I believe I misunderstood the OP's exact situation. I thought that the only thing they received was a single ELF executable (wrapped in a tarball) – Celada Feb 20 '16 at 14:43
  • No - I took a quick look (not on my to-do list...), and it's got ~1500 files. Just taking a look with OSX, that ran, and starts in a tutorial on a web browser. @kba's answer is partly useful, though as a rule, I'd try unzipping it under /usr/local rather than my home directory. (Not all programs would work -- take Eclipse for example). – Thomas Dickey Feb 20 '16 at 14:47
1

ln -s </path/to/executable> /usr/local/bin/<program-name> should do the job. The standard place for local software is /opt, so I would recommend moving the software folder/file there before installing it with the command above.

1

Just an addition to what kba said:

#! /bin/sh

If you want your file manager to recognize your file as a shell script (sh) rather than a plain-text document, you should include the "!": the "#!" pair (the shebang) tells the system which interpreter to execute the file with.

If you open any file inside ./bin, you will probably see the same header.
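A minimal demonstration of the difference: with the shebang, the kernel executes the file directly via /bin/sh; with only "#", the first line is just a comment and the file is treated as plain text. The demo.sh filename is of course arbitrary:

```shell
# Write a script whose first line is a shebang, make it executable, run it.
cat > demo.sh <<'EOF'
#!/bin/sh
echo "ran via /bin/sh"
EOF
chmod +x demo.sh
./demo.sh    # prints: ran via /bin/sh
```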