
Imagine something like this:

$ curlsh http://www.example.org
> GET /foo/bar/bam
...output here...
> POST /thing/pool ...
... result here.... 

Is there a tool that lets me do that?

Cheeso

7 Answers


On many Linux/Unix systems, your pseudocode will just work in any shell, although your paths should really be full URLs.

For instance, on Debian-based systems, the package libwww-perl installs three symlinks to lwp-request which are called /usr/bin/GET, /usr/bin/HEAD, and /usr/bin/POST. These do what you would expect. Recent versions of OpenSuse's perl-libwww-perl package omit the symlinks (which is probably a bug), so you would have to create them yourself or use lwp-request directly. Generally and for many years, it has been quite a safe assumption that GET, HEAD, and POST executables are available on unixoid systems.

Of course you could also use curl for all of these tasks, so perhaps I do not understand why you feel that a command line shell such as bash is not interactive.
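
A quick sketch of what those commands look like in practice (the URLs are placeholders; this assumes libwww-perl's symlinks are installed):

```shell
# GET/HEAD/POST from libwww-perl; each takes a full URL.
GET http://www.example.org/foo/bar/bam
HEAD http://www.example.org/

# POST reads the request body from stdin:
echo 'x=1&y=z' | POST http://www.example.org/thing/pool

# Without the symlinks, call lwp-request directly with -m:
echo 'x=1&y=z' | lwp-request -m POST http://www.example.org/thing/pool
```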


Thanks for the answers.

After googling around, I found resty, which is a shell script wrapper around the curl tool. This is really what I want. It's 155 lines of shell script, and when I run it, I get functions for GET, PUT, POST, DELETE, and OPTIONS. These functions are just wrappers around the curl program found on my path.

It works like this on MacOSX bash:

$ . resty

$ resty https://api.example.org
https://api.example.org*

$ GET /v1/o/orgname -u myusername:password
{
  "createdAt" : 1347007133508,
  "createdBy" : "admin",
  "displayName" : "orgname",
  "environments" : [ "test", "prod" ],
  "lastModifiedAt" : 1347007133508,
  "lastModifiedBy" : "admin",
  "name" : "orgname",
  "properties" : {
    "propertyList" : [ ... ]
  }
}
$

The first line there sources the script, which defines those functions in the current shell.

The next line, the "resty" command, sets the URL base. Thereafter, any call to GET, PUT, POST... implicitly references that base. I showed an example that emits prettified JSON. I think if your server emits minified JSON, you could pretty-print it with an external script by piping the output.
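
For instance, a sketch using Python's built-in pretty-printer (python -m json.tool ships with Python; jq would work too, if installed):

```shell
# Pretty-print a minified JSON response by piping it through
# an external formatter; resty's GET just writes to stdout.
GET /v1/o/orgname -u myusername:password | python -m json.tool
```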

There's also support for host-based preferences. Suppose your target host is api.example.org. Create a file called ~/.resty/api.example.org and put in it lines that specify arguments to be passed on every curl call to that host. Each HTTP verb gets its own line. So, putting this content in the file:

GET -u myusername:mypassword --write-out "\nStatus = %{http_code}\n"

...means that every time I do a GET when api.example.org is the base hostname, the curl command will implicitly use the -u and --write-out args shown there. (-u for basic auth).

As another example, you could specify the Accept header in that file, so that you always request XML:

GET --header "Accept: application/xml"

Any curl command line arg is supported in that preferences file. All the curl args for the host+verb tuple need to go on a single line in the preferences file.
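
A hypothetical ~/.resty/api.example.org covering several verbs might look like this (the credentials and headers are made up; note each verb's arguments stay on one line):

```shell
# Create a per-host preferences file for resty.
mkdir -p ~/.resty
cat > ~/.resty/api.example.org <<'EOF'
GET -u myusername:mypassword --write-out "\nStatus = %{http_code}\n"
POST -u myusername:mypassword --header "Content-Type: application/json"
DELETE -u myusername:mypassword
EOF
```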

Handy.

Cheeso

lftp:

$ lftp http://repo.xplico.org/pool/
cd ok, cwd=/pool
lftp repo.xplico.org:/pool> ls
drwxr-xr-x  --  /
drwxr-xr-x            -  2012-02-13 09:48  main
lftp repo.xplico.org:/pool> cd main
lftp repo.xplico.org:/pool/main> ls
drwxr-xr-x  --  ..
drwxr-xr-x            -  2012-02-13 09:48  x

Directory listings only work for websites that send directory indexes. But even if they don't, you can still use the get command to fetch individual files.

  • The get command will download the file; cat will output it to the screen. To do an HTTP POST you can use something like: quote post post.php x=1&y=z. – donothingsuccessfully Oct 16 '12 at 17:13

You can use Netcat.

netcat is a simple Unix utility which reads and writes data across network connections, using the TCP or UDP protocol.

Here is an example that retrieves the VLC home page:

nc www.videolan.org 80
GET http://www.videolan.org/vlc/ HTTP/1.0

HTTP/1.1 200 OK
Date: Tue, 16 Oct 2012 07:34:48 GMT
Server: Apache/2.2.16 (Debian)
Content-Location: index.html
[…]

The rest of the HTML is output to the console. Note: you need to press Return twice after the GET line, because the blank line is what ends the request headers.
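
To avoid the interactive typing, you can pipe the whole request into nc; printf lets you spell out the CRLF line endings and the blank line explicitly (a sketch, using the same host as above):

```shell
# Send the request non-interactively; the final \r\n\r\n is the
# blank line that terminates the HTTP request headers.
printf 'GET /vlc/ HTTP/1.0\r\nHost: www.videolan.org\r\n\r\n' \
    | nc www.videolan.org 80
```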

Marco
  • Dude, you're hardcore. Netcat to do HTTP? Ouch! I was hoping for something with a few more bells and whistles. – Cheeso Oct 16 '12 at 16:01
  • I have no idea what you're after. At least it does what you stated in your question, which was not very detailed. Netcat is a very handy tool; for web browsing it's maybe not that well suited. Depends on your needs. – Marco Oct 16 '12 at 16:05
  • No offense, I'm just saying it would be nice if the tool were a little more intelligent. Like if there were a way to set the HTTP headers it would send. Or if I didn't have to type "HTTP/1.1" for each request, etc. – Cheeso Oct 16 '12 at 16:20

Yep, you can use the "--config" option:

Specify the filename to -K, --config as '-' to make curl read the file from stdin.

Example:

$ curl -K-
url https://github.com/blog
remote-name
<Ctrl + D>
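
The same thing works non-interactively with a here-document, so you can script it instead of pressing Ctrl+D:

```shell
# Feed the curl config on stdin; one option per line.
curl -K- <<'EOF'
url https://github.com/blog
remote-name
EOF
```
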
Zombo

You can use interactive shells with either Python or Perl:

In Perl

$ perl -MWWW::Mechanize::Shell -eshell
(no url)> get http://cnn.com
Retrieving http://cnn.com(200)
http://edition.cnn.com/> title
CNN.com International - Breaking, World, Business, Sports, Entertainment and Video News
http://edition.cnn.com/> content
(...)

See perldoc WWW::Mechanize::Shell or http://search.cpan.org/~corion/WWW-Mechanize-Shell-0.52/lib/WWW/Mechanize/Shell.pm


In Python:

$ python -i -c 'import mechanize; br = mechanize.Browser(factory=mechanize.RobustFactory())'
>>> br.open("http://xkcd.com/")
<response_seek_wrapper at 0x2824a28 whose wrapped object = <closeable_response at 0x27c2710 whose fp = <socket._fileobject object at 0x27be3d0>>>
>>> br.title()
'xkcd: Identity'
>>> print br.response().read()
(...)

See http://wwwsearch.sourceforge.net/mechanize/


I quite like lynx for interactive browsing on the command line. It's more of a full-blown browser (that fits into an ncurses application) than a raw HTTP tool, though.

I've sent raw HTTP requests over SSL before, for which I used openssl, but this only allows one request per connection:

> openssl s_client -quiet -connect google.com:443
GET /
... HTML response

> openssl s_client -quiet -connect myprivateserver.com:443
POST /thing/pool ...
... response

For more info on openssl's s_client options, man s_client contains the details.
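
A scripted version of the same idea (a sketch; -ign_eof keeps s_client from exiting when stdin closes before the response arrives):

```shell
# Pipe one HTTP request into an SSL connection non-interactively.
printf 'GET / HTTP/1.0\r\nHost: google.com\r\n\r\n' \
    | openssl s_client -quiet -ign_eof -connect google.com:443
```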

Alex Leach