
I am working on a server environment which has almost no executables in its /bin folder, except for a few basic ones like ls and ln, but I want other common binaries (like sed, awk, gcc, etc.) to be installed. Since there are obviously no package utilities like apt-get, I wanted to just wget the binaries I'm missing, so I wanted to ask whether it is reasonable to do so and, if yes, where do I get the binaries from?

Thank you in advance!

nakajuice
  • 111
  • Rather than downloading individual utilities, I think you should install a package system. There should be embedded distributions that you can install easily, probably with opkg as the package manager. OpenWRT, maybe. – Gilles 'SO- stop being evil' Jan 28 '15 at 21:52

3 Answers


No

If they are not there, and you have neither package management tools nor even a compiler (and, presumably, no appropriate header files), your only choice is to install statically built binaries. There are numerous disadvantages to this approach, e.g. increased file size, and having to replace every such binary whenever just one of its libraries requires an update (such as a security fix).

Instead, check for options to install a custom distro of your choice. If it is absolutely necessary, though, check this question for info on creating static binaries.
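For illustration, here is a minimal sketch of that static-build route, assuming a separate build machine with the same architecture as the server; the sed version, URL, and target path below are only examples:

    # On a build machine matching the server's architecture:
    wget https://ftp.gnu.org/gnu/sed/sed-4.9.tar.xz
    tar xf sed-4.9.tar.xz && cd sed-4.9
    ./configure LDFLAGS=-static   # ask the linker for a static binary
    make
    file sed/sed                  # should report "statically linked"
    scp sed/sed user@server:/sbin/sed

Be aware that glibc makes fully static linking awkward for some features (e.g. anything touching NSS), so building against musl is a common workaround.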

  • Statically built binaries often do wind up a little larger, but they also depend on a lot less. They also load faster and do not break when a library is updated. The question is about core utilities: is there an advantage to dynamically linking those? I don't think so - the underlying C-language API has barely changed in 30 years. What possible benefit might there be in requiring dependencies for such things? – mikeserv Jan 13 '16 at 10:55
  • @mikeserv Personally, I consider bug fixes reason enough. Note, he is talking about a server environment, not some unconnected embedded device. – Syren Baran Jan 13 '16 at 11:33
  • What kind of bugs? And don't you think they're more likely introduced than fixed by a shifting dependency base? I think that is realistically how it works in practice, anyway. – mikeserv Jan 13 '16 at 11:48
  • @mikeserv Having used various flavours of Linux and Unix for 20 years, I don't think the "shifting dependency base" caused by package management is an issue. Feel free to build all your binaries statically linked if you want to, though. – Syren Baran Jan 13 '16 at 12:02
  • That's an interesting notion. I don't think dynamic linking is bad, but I do think it ought to be done with a purpose - it's silly to dynamically link an echo or a cat, though. – mikeserv Jan 13 '16 at 12:49
  • @mikeserv If anything, that just proves one should use package management. On a side note, echo and cat are usually dynamically linked. And would you seriously recommend statically building, e.g., gcc? – Syren Baran Jan 13 '16 at 13:35
  • tcc: yes. gcc: no - that would probably create a black hole or something anyway. And I know they are dynamically linked on many Linux systems; that does not make it any less silly. What's the point of that? Oh, and all of the filesystem confusion, the so.[num] links, and the rest are generally only relevant to package managers. They tend to cause a lot of the issues they're supposed to solve. I use them, but they are very complicated by nature, and there are better ways; I'm just too lazy to set it up lately. – mikeserv Jan 13 '16 at 13:46
  • The point? Simple: less hassle. No reason to recompile and repackage when a library changes. A noteworthy exception is busybox; it is statically linked, which is very useful if libs have been damaged or compromised. – Syren Baran Jan 13 '16 at 13:59
  • That's a circular argument. Statically linked binaries don't need to be recompiled when a library changes - they're already statically linked. They also continue to work if libs have been damaged or compromised, because they're statically linked. That's exactly what I mean about package managers causing their own problems - dynamic linking can't possibly be considered a solution to the problems of dynamic linking; that simply doesn't make any sense. My point is that simple utilities are not improved by making them less simple. Let them be simple! (A quick way to check how a given binary is linked is sketched below.) – mikeserv Jan 13 '16 at 16:35
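As a footnote to the linking debate above, two standard commands show how a given binary was linked (the path is just an example):

    file /bin/cat   # prints "statically linked" or "dynamically linked"
    ldd /bin/cat    # lists shared-object dependencies, or reports
                    # "not a dynamic executable" for a static binary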

You can add whatever you like, but whether you should is a matter of circumstance. Chiefly, I'd ask: why aren't they there in the first place?

Is this a home-server-type box running a vendor's custom Linux distro, generally meant to be managed via a web GUI rather than used at the command line? In that case, there's probably no harm. (You may want to see whether other users of the same device have put together a repository of compiled programs, with an add-on package manager similar to apt-get.)

Is this a higher-end, appliance-type device? Are you going to make support calls difficult if you mess with the installed software? In that case, stay away.

Or, is this a workplace machine? In that case, there may be policy reasons which dictate a minimal environment. I'd check to be sure.

Overall, the theme is: answer the question "why aren't they there?", and then you can decide for yourself whether or not it's a good idea.

mattdm
  • 40,245
  • Thank you for your reply! I suppose it is the first case on your list. There is a web server running on this machine, and it was supposed to be sufficient for the end user. But now that I need to deploy a Ruby application on this server, I have to install all the missing utilities, I guess. Another problem is that installing those core utilities is troublesome, since they need themselves (sed, expr, a POSIX shell) to get installed, which is really confusing. – nakajuice Jan 28 '15 at 19:54
  • As I already mentioned, the variety of utilities is very limited, so even finding out the kernel version is a hard task (things like uname -a and cat /proc/version don't yield anything; a shell-only workaround is sketched after this thread). And as I mentioned, installing the core utilities turns out to be problematic since they need themselves. Is there any repo that would contain compiled binaries instead of sources? – nakajuice Jan 28 '15 at 20:06
  • @haemhweg Are you logging into a chroot? Are you root on the server? – jordanm Jan 28 '15 at 20:11
  • Nope, I am not. – nakajuice Jan 28 '15 at 20:16
  • @haemhweg: Not chrooted, not root, or neither? – Runium Jan 28 '15 at 20:22
  • Sorry if I misunderstood your question: there is a root user on the system, but I am logged in as another user. And I am logged in at the root directory, i.e. /, since bin, etc, usr and the other usual directories are present. I have no permission to write into /bin, but I can write into /sbin, which amounts to the same thing since /sbin is in $PATH. – nakajuice Jan 28 '15 at 20:27
  • Small update here: I am chrooted indeed. – nakajuice Jan 28 '15 at 22:26
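Regarding the comment above that uname -a and cat /proc/version yield nothing: if a POSIX-ish shell is available and /proc is mounted inside the chroot, the shell's builtins can stand in for cat. This is only a sketch and may still fail in a heavily stripped environment:

    # read and printf are shell builtins in common shells,
    # so no external binary is needed:
    while IFS= read -r line; do printf '%s\n' "$line"; done < /proc/version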

The simplest and least intrusive way to get a large number of utilities would be to find a busybox binary suitable for your OS and install it.

That is a single file, but it provides several hundred commands.
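As a rough illustration, assuming wget works on the box and the architecture is x86_64 (the URL and version below are examples; busybox.net publishes prebuilt static binaries for several architectures):

    wget https://busybox.net/downloads/binaries/1.35.0-x86_64-linux-musl/busybox
    chmod +x busybox
    ./busybox sed --version         # run any applet directly, or...
    mkdir -p ~/bin
    ./busybox --install -s ~/bin    # ...populate a directory with symlinks
    export PATH=~/bin:$PATH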

jlliagre
  • 61,204
  • I was already surprised nobody had mentioned busybox. However, do keep in mind he says he has a limited number of utilities, probably because he is already in a busybox environment in the first place... it may well be that he is in a chroot/jail/Docker container and is not aware of it. We also have to keep in mind the OP is from 1 year ago. – Rui F Ribeiro Jan 13 '16 at 09:43
  • Hmmm, there is a comment down the thread that he is chrooted as a normal user... – Rui F Ribeiro Jan 13 '16 at 09:45
  • @RuiFRibeiro Thanks, I overlooked that the OP was that old. In any case, if busybox is already installed, getting most of the missing utilities (like the mentioned sed and awk, but not gcc) might then be done just by creating symlinks, as sketched below. The fact that he is a regular user in a chrooted environment wouldn't prevent busybox from working. – jlliagre Jan 13 '16 at 09:53
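To illustrate the symlink idea from the last comment (the paths are assumptions; busybox picks the applet to run from the name it is invoked as, i.e. argv[0]):

    # The OP mentioned that ln is available and that /sbin is
    # writable and in $PATH:
    ln -s /bin/busybox /sbin/sed
    ln -s /bin/busybox /sbin/awk
    sed --version   # now runs the busybox sed applet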