Showing posts from March, 2015

Faster morton codes with compiler intrinsics

Today I learned that newer Intel processors have an instruction which is tailor-made for generating morton codes: the PDEP instruction. There's an instruction for the inverse as well, PEXT. These exist in 32- and 64-bit versions and you can use them directly from C or C++ code via compiler intrinsics: _pdep_u32/u64 and _pext_u32/u64. Miraculously, both the Visual C++ and GCC versions of the intrinsics have the same names. You'll need an Intel Haswell processor or newer to be able to take advantage of them, though.

Docs for the instructions:

- Intel's docs
- GCC docs
- Visual C++ docs

This page has a great write-up of older techniques for generating morton codes:

- Jeroen Baert's blog

...but the real gold is hidden at the bottom of that page in a comment from Julien Bilalte, which is what clued me in to the existence of these instructions.

Update: there's some useful info on Wikipedia about these instructions too.
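To make that concrete, here's a minimal sketch of 2D morton encoding and decoding with these intrinsics. The function names are my own, not from any library; the masks pick out the even and odd bit positions:

    #include <immintrin.h> // _pdep_u32 / _pext_u32 (BMI2; compile with -mbmi2 on GCC)
    #include <stdint.h>

    // Interleave the bits of x and y: x lands in the even bit positions,
    // y in the odd ones, producing a 2D morton code.
    uint32_t morton_encode_2d(uint16_t x, uint16_t y)
    {
        return _pdep_u32(x, 0x55555555u) | _pdep_u32(y, 0xAAAAAAAAu);
    }

    // The inverse: PEXT gathers the even/odd bits back into x and y.
    uint16_t morton_decode_2d_x(uint32_t code)
    {
        return (uint16_t)_pext_u32(code, 0x55555555u);
    }

    uint16_t morton_decode_2d_y(uint32_t code)
    {
        return (uint16_t)_pext_u32(code, 0xAAAAAAAAu);
    }

The 64-bit variants work the same way with wider masks, and the same trick extends to 3D by depositing into every third bit.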

Awesome tools for Windows users

I moved back to Windows on my home computer a few months ago. There are a few amazing free tools I've found since then that have been making my life better, and I thought they deserved a shout-out. They are:

- SumatraPDF: A fantastic PDF reader. Does everything I want and nothing I don't.
- RapidEE: A sane way to edit environment variables. The simple joy of just being able to resize the window is... incredible.
- 7-Zip: The best tool for dealing with compressed files on Windows, bar none.
- GPU-Z: A really handy way to see details about your GPU(s).
- XnView: An amazingly good image viewer, which can also do bulk file format conversions.

If you haven't already got these... go get them!

Whole program lexical analysis

I was thinking about parsing and lexical analysis of source code recently (after all, who doesn't... right??). Everywhere I've looked - which admittedly isn't in very many places - parsers still seem to treat their input as a stream of tokens. The stream abstraction made sense in an era where memory was more limited. Does it still make sense now, when 8 GB of RAM is considered small?

What if, instead, we cache the entire token array for each file? That is, we mmap the file, lex it in place and store the tokens in an in-memory array. Does this make parsing any easier? Does it lead to any speed savings? Or does it just use an infeasible amount of memory?

Time for some back-of-the-napkin analysis. Let's say we're using data structures kind of like this to represent tokens:

    enum TokenType {
        /* an entry for each distinct type of token */
    };

    struct Token {
        TokenType type; // The type of token
        int start;      // Byte offset for the start of the token
        int end;        // Byte offset for the end of the token
    };
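As a rough sketch of the mmap-and-lex-in-place idea, assuming POSIX mmap: the lex_whole_file name is mine, and the whitespace-delimited "lexer" with its single WORD token type is just a stand-in for a real one, there only to show tokens being stored as byte offsets into the mapped file:

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>
    #include <cctype>
    #include <vector>

    enum TokenType { WORD /* an entry for each distinct type of token */ };

    struct Token {
        TokenType type; // The type of token
        int start;      // Byte offset for the start of the token
        int end;        // Byte offset for the end of the token
    };

    std::vector<Token> lex_whole_file(const char* path)
    {
        std::vector<Token> tokens;

        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return tokens;

        struct stat st;
        fstat(fd, &st);

        // Map the whole file read-only so we can lex it in place.
        const char* src = static_cast<const char*>(
            mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0));
        if (src != MAP_FAILED) {
            int i = 0;
            const int n = static_cast<int>(st.st_size);
            while (i < n) {
                // Skip whitespace, then record the byte range of the next token.
                while (i < n && isspace(static_cast<unsigned char>(src[i])))
                    ++i;
                int start = i;
                while (i < n && !isspace(static_cast<unsigned char>(src[i])))
                    ++i;
                if (i > start)
                    tokens.push_back(Token{WORD, start, i});
            }
            munmap(const_cast<char*>(src), st.st_size);
        }

        close(fd);
        return tokens;
    }

Note that the tokens carry no string data of their own: each one is just a type tag plus two offsets back into the file, which is what keeps the per-token cost down to a handful of bytes.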