I don't think huge pages are worth the trouble for typical desktop use. Data in huge pages is slightly faster to access (mainly because it causes fewer TLB misses), but using them requires allocating memory in chunks of 2MB at a time (on x86_64, with similar sizes on other architectures). Most applications allocate memory in far smaller chunks.
The two main users of huge pages in application code are number-crunching programs that allocate huge arrays of numbers, and database software. For most other applications, it's rarely worth the trouble of writing code that looks up the architecture characteristics (to find out the size and availability of huge pages) and ensures that data structures are allocated with the proper size and alignment.
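For the programs that do benefit, the explicit route is a huge-page-backed mapping. Here is a minimal sketch in C, assuming 2MB huge pages on x86_64 and that some huge pages have been reserved by the administrator (e.g. via /proc/sys/vm/nr_hugepages); the buffer size and the fallback path are illustrative, not a recommended production pattern.

```c
/* Minimal sketch: explicitly request huge-page-backed anonymous memory.
 * Assumes 2MB huge pages and that some have been reserved
 * (e.g. echo 64 > /proc/sys/vm/nr_hugepages). */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define HUGE_PAGE_SIZE (2UL * 1024 * 1024)   /* 2MB on x86_64 */

int main(void)
{
    /* The length must be a multiple of the huge page size. */
    size_t len = 16 * HUGE_PAGE_SIZE;

    /* Ask the kernel for huge-page-backed memory. */
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (buf == MAP_FAILED) {
        /* Typically fails when no huge pages are reserved; fall back
         * to ordinary 4kB pages. */
        perror("mmap(MAP_HUGETLB)");
        buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
    }

    memset(buf, 0, len);   /* touch the memory so it is actually backed */
    munmap(buf, len);
    return 0;
}
```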
Linux also attempts to use huge pages automatically (transparent huge pages), but again this rarely kicks in with typical desktop workloads, because memory is rarely allocated in sufficiently large chunks.
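An application can nudge that automatic mechanism with madvise(MADV_HUGEPAGE). A minimal sketch, assuming transparent huge pages are enabled in the kernel (the "madvise" or "always" setting in /sys/kernel/mm/transparent_hugepage/enabled); the sizes are illustrative:

```c
/* Minimal sketch: hint the kernel that a large anonymous mapping is a
 * good candidate for transparent huge pages. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define HUGE_PAGE_SIZE (2UL * 1024 * 1024)   /* 2MB on x86_64 */

int main(void)
{
    size_t len = 64 * HUGE_PAGE_SIZE;

    /* An ordinary anonymous mapping; the kernel can only use huge pages
     * for the 2MB-aligned, 2MB-sized portions of it. */
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Mark the region as a candidate for transparent huge pages. */
    if (madvise(buf, len, MADV_HUGEPAGE) != 0)
        perror("madvise(MADV_HUGEPAGE)");

    memset(buf, 0, len);   /* touching the pages lets THP back them */

    /* The AnonHugePages line in /proc/self/smaps shows whether
     * huge pages were actually used for this mapping. */
    munmap(buf, len);
    return 0;
}
```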