ccurtsinger
Joined in 2014
28 Karma
18 posts
(Replying to PARENT post)
Now that I'm in MA, I have heard several people say that concrete highways do not last through the winter, despite the large number of concrete highways in good condition in MN. Maybe this is the result of careful local road-surface research? Roads are certainly better in MN than in MA, even though MN has colder and snowier winters. That could just be anti-highway-spending sentiment left over from the Big Dig, though...
(Replying to PARENT post)
Minor correction for the title: "SweeTango" has just one 'T' (http://en.wikipedia.org/wiki/SweeTango)
(Replying to PARENT post)
1. Laziness: Not required. See Lisp, Scheme, ML, OCaml, etc.
2. Functions as first-class citizens: This is probably the only hard and fast requirement.
3. No Side Effects: Not required. Again, see Scheme, ML, and Rust.
4. Static Linking: Certainly not, but the author seems to mean static binding, which is more important. However, a functional language doesn't actually need any binding aside from function invocation (see Lambda Calculus). `Let` bindings are generally available and very useful.
5. No explicit flow control: Function invocation is flow control. Loops aren't functional, but some functional languages have them.
6. No commands/procedures: If the author means "no top-level function definitions" that is clearly not true. Some functional languages even have macro languages.
This article gives the (incorrect) impression that functional programming is about a set of restrictions you must follow 100% of the time. Functional programming is a style that can be used in any language, as long as you can pass functions around as values. It wasn't pretty, but Java <=1.7 could still support a functional programming style by using `Callable` objects.
The `map` and `reduce` operations are certainly possible in imperative languages. Python has them built-in, they can be written in C++, and so on.
(Replying to PARENT post)
Alignment: different systems have different minimum alignment requirements. Most require that all allocations be 8- or 16-byte aligned.
Buffer overruns: using object headers is risky, since a buffer overrun can corrupt your heap metadata. You'll either need to validate heap metadata before trusting it (e.g. keep a checksum) or store it elsewhere. This also wastes quite a bit of space for small objects.
Size segregation: this isn't absolutely essential, but most real allocators serve allocations for each size class from a different block. This is nice for locality (similarly-sized objects are more likely to be the same type, accessed together, etc). You can also use per-page or per-block bitmaps to track which objects are free/allocated. This eliminates the need for per-object headers.
Internal frees: many programs will free a pointer that is internal to an allocated object. This is especially likely with C++, because of how classes using inheritance are represented.
Double/invalid frees: you'll need a way of detecting double/invalid frees, or these will quickly lead to heap corruption. Aborting on the first invalid free isn't a great idea, since the way you insert your custom allocator can cause the dynamic linker to allocate from its own private heap, then free these objects using your custom allocator.
Thread safety: at the very least, you need to lock the heap when satisfying an allocation. If you want good performance, you need to serve allocations for different threads from separate cache lines, or you'll end up with false sharing. Thread-segregated heaps also reduce contention, but you need a way of dealing with cross-thread frees (thread A allocates p, passes it to B, which frees it).
The HeapLayers library is very useful for building portable custom allocators: https://github.com/emeryberger/Heap-Layers. The library includes easily-reusable components (like freelists, size classes, bitmaps, etc.) for building stable, fast allocators. HeapLayers is used to implement the Hoard memory allocator, a high performance allocator optimized for parallel programs.
(Replying to PARENT post)
The same reasoning applies to reads. If sensitive objects are distributed throughout the sparse heap, the probability of hitting a specific sensitive object is the same as the probability of overwriting the vtable in the above attack. The probability of reading out any sensitive object depends on the number of sensitive objects and the sparsity of the heap.
There are also guard pages sprinkled throughout the sparse heap. Section 6.3.1 shows the minimum probability of a one byte overflow (read or write) hitting a guard page. This probability increases with larger objects and larger overflows. You can also increase sparsity to increase this probability, at a performance cost.
(Replying to PARENT post)
Randomized allocation makes it nearly impossible to forge pointers or locate sensitive data in the heap, and makes reuse unpredictable.
This is strictly more powerful than ASLR, which does nothing to prevent Heartbleed. Moving the base of the heap doesn't change the relative addresses of heap objects with a deterministic allocator. A randomized allocator does change these offsets, which makes it nearly impossible to exploit a heap buffer overrun (and quite a few other heap errors).
(Replying to PARENT post)
Implied volatility is computed based on the difference between an option's selling price and an algorithmically-determined price. That's exactly the sort of information any trader would rely on to place bets. When the calculated value is higher than the price, there is profit to be made. Any decrease in trading latency will allow traders to buy up instruments selling below their estimated value a bit quicker, driving these numbers closer together.
(Replying to PARENT post)
And since this is a discussion about a book, it's worth pointing out that the book is focused on an alternative proposal for market design.
(Replying to PARENT post)
Still, I think most large systems where latency is a major concern have some level of concurrency and/or queueing that can be manipulated to reduce latency.