ccurtsinger

📅 Joined in 2014

🔼 28 Karma

โœ๏ธ 18 posts

15 latest posts


(Replying to PARENT post)

Resubmitting after the original was flagged for no reason.
👤 ccurtsinger 🕑 3mo 🔼 0 🗨️ 0
👤 ccurtsinger 🕑 3y 🔼 4 🗨️ 0

(Replying to PARENT post)

Not really. Calling a C++ template-generated function is no different from calling a C function. If you have a lot of tiny template-generated functions then sure, the call overhead could add up. But it would be no slower than a C program with a bunch of tiny functions.
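
A minimal sketch of the point: each template instantiation is just an ordinary function the compiler emits, exactly like a hand-written C-style counterpart.

```cpp
#include <cstdio>

// Each template instantiation is just an ordinary function the
// compiler emits (and usually inlines); there is no runtime dispatch.
template <typename T>
T add(T a, T b) { return a + b; }

// The equivalent hand-written C-style function.
int add_int(int a, int b) { return a + b; }

int main() {
    // Both calls compile down to the same kind of direct call.
    printf("%d %d\n", add(1, 2), add_int(1, 2));
    return 0;
}
```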
👤 ccurtsinger 🕑 9y 🔼 0 🗨️ 0

(Replying to PARENT post)

Minnesota does a significant amount of research on road surface materials. There is a section of I-94 northwest of the Twin Cities with three parallel segments of highway, and traffic can be diverted onto one of the segments to test new road surfaces. Here's the MN DOT site with some test videos: http://www.dot.state.mn.us/mnroad/testcells/mainline.html (thrilling stuff).

Now that I'm in MA, I have heard several people say that concrete highways do not last through the winter, despite the large number of concrete highways in good condition in MN. Maybe this is the result of careful local road surface research? Roads are certainly better in MN than they are in MA, even though MN has colder and snowier winters. That could just be anti-highway-spending sentiment left over from the Big Dig, though...

👤 ccurtsinger 🕑 10y 🔼 0 🗨️ 0

(Replying to PARENT post)

I grew up in MN and always loved Honeycrisp, but after relocating I was surprised to hear people describe them as tasteless or mealy. It turns out they vary a lot depending on growing region: cold-hardy apples produce very different fruit in warmer climates. I have heard this is a big part of why the SweeTango brand has been limited to a small number of growers in select regions.

Minor correction for the title: "SweeTango" has just one 'T' (http://en.wikipedia.org/wiki/SweeTango)

👤 ccurtsinger 🕑 10y 🔼 0 🗨️ 0

(Replying to PARENT post)

I just closed my account. Unfortunately, this requires sending an email to support@uber.com. It's inconvenient, but it gives you a place to explain that you will not use the service until they comply with the ADA.
👤 ccurtsinger 🕑 10y 🔼 0 🗨️ 0

(Replying to PARENT post)

The "six distinct features of a functional language" are misleading/inaccurate:

1. Laziness: Not required. See Lisp, Scheme, ML, OCaml, etc.

2. Functions as first-class citizens: This is probably the only hard-and-fast requirement.

3. No Side Effects: Not required. Again, see Scheme, ML, and Rust.

4. Static Linking: Certainly not, but the author seems to mean static binding, which is more important. However, a functional language doesn't actually need any binding aside from function invocation (see the lambda calculus). `let` bindings are generally available and very useful.

5. No explicit flow control: Function invocation is flow control. Loops aren't functional, but some functional languages have them.

6. No commands/procedures: If the author means "no top-level function definitions" that is clearly not true. Some functional languages even have macro languages.

This article gives the (incorrect) impression that functional programming is about a set of restrictions you must follow 100% of the time. Functional programming is a style that can be used in any language, as long as you can pass functions around as values. It wasn't pretty, but Java <=1.7 could still support a functional programming style by using `Callable` objects.

The `map` and `reduce` operations are certainly possible in imperative languages. Python has them built-in, they can be written in C++, and so on.
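
For instance, a rough sketch of `map` and `reduce` in plain C++, using `std::transform` and `std::accumulate` from the standard library:

```cpp
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <vector>

int main() {
    std::vector<int> xs = {1, 2, 3, 4};

    // "map": square each element
    std::vector<int> squares(xs.size());
    std::transform(xs.begin(), xs.end(), squares.begin(),
                   [](int x) { return x * x; });

    // "reduce": sum the squares
    int sum = std::accumulate(squares.begin(), squares.end(), 0);

    printf("%d\n", sum);  // prints 30
    return 0;
}
```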

👤 ccurtsinger 🕑 10y 🔼 0 🗨️ 0

(Replying to PARENT post)

There are quite a few extra issues that come up when you implement a usable allocator. I'm surprised the article didn't mention them. Here are just a few:

Alignment: different systems have different minimum alignment requirements. Most require that all allocations be 8- or 16-byte aligned.
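
A minimal sketch of the usual fix, assuming a 16-byte minimum alignment: round every request up to the next multiple of the alignment before carving out memory.

```cpp
#include <cassert>
#include <cstddef>

// Round a requested size up to the next multiple of the minimum
// alignment (a power of two) before carving memory out of a block.
constexpr size_t kAlignment = 16;  // assumed minimum alignment

size_t align_up(size_t size) {
    return (size + kAlignment - 1) & ~(kAlignment - 1);
}

int main() {
    assert(align_up(1) == 16);
    assert(align_up(16) == 16);
    assert(align_up(17) == 32);
    return 0;
}
```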

Buffer overruns: using object headers is risky, since a buffer overrun can corrupt your heap metadata. You'll either need to validate heap metadata before trusting it (e.g. keep a checksum) or store it elsewhere. Headers also waste quite a bit of space for small objects.
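
A toy illustration of the checksum idea; the header layout and canary value are hypothetical, not from any particular allocator.

```cpp
#include <cstdint>
#include <cstdlib>

// A hypothetical object header that checksums its own metadata. An
// overrun from the previous object that smashes the header will
// (very likely) break the checksum. Layout and canary are made up.
struct Header {
    uint64_t size;
    uint64_t checksum;  // size XOR a secret canary
};

const uint64_t kCanary = 0x9e3779b97f4a7c15ULL;  // assumed secret value

void write_header(Header* h, uint64_t size) {
    h->size = size;
    h->checksum = size ^ kCanary;
}

// Validate before trusting the metadata; bail out on corruption.
uint64_t read_size(const Header* h) {
    if ((h->size ^ kCanary) != h->checksum) abort();  // header corrupted
    return h->size;
}

int main() {
    Header h;
    write_header(&h, 64);
    return read_size(&h) == 64 ? 0 : 1;
}
```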

Size segregation: this isn't absolutely essential, but most real allocators serve allocations for each size class from a different block. This is nice for locality (similarly-sized objects are more likely to be the same type, accessed together, etc). You can also use per-page or per-block bitmaps to track which objects are free/allocated. This eliminates the need for per-object headers.
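
A toy sketch of one fixed-size-class block with a per-block bitmap; it assumes the GCC/Clang `__builtin_ctzll` intrinsic to find the first free slot.

```cpp
#include <cstddef>
#include <cstdint>

// A toy block holding 64 fixed-size slots for one size class, with a
// one-bit-per-slot bitmap instead of per-object headers (1 = in use).
struct Block {
    static constexpr size_t kSlotSize = 32;  // this block's size class
    uint64_t bitmap = 0;
    char slots[64 * kSlotSize];

    void* alloc() {
        if (bitmap == ~0ULL) return nullptr;  // block is full
        int i = __builtin_ctzll(~bitmap);     // index of first free slot
        bitmap |= (1ULL << i);
        return &slots[i * kSlotSize];
    }

    void free_slot(void* p) {
        size_t i = (static_cast<char*>(p) - slots) / kSlotSize;
        // Checking the bit before clearing it would also catch double frees.
        bitmap &= ~(1ULL << i);
    }
};

int main() {
    Block b;
    void* p = b.alloc();
    b.free_slot(p);
    return 0;
}
```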

Internal frees: many programs will free a pointer that is internal to an allocated object. This is especially likely with C++, because of how classes using inheritance are represented.
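
With fixed-size slots, interior pointers are cheap to handle: round the pointer down to its slot boundary. A minimal sketch (names are illustrative):

```cpp
#include <cstddef>
#include <cstdint>

// With fixed-size slots, an interior pointer can be rounded down to
// the start of its enclosing object.
void* object_start(void* base, void* p, size_t slot_size) {
    uintptr_t b = reinterpret_cast<uintptr_t>(base);
    uintptr_t q = reinterpret_cast<uintptr_t>(p);
    return reinterpret_cast<void*>(b + ((q - b) / slot_size) * slot_size);
}

int main() {
    char block[128];
    void* interior = block + 37;  // 5 bytes into the second 32-byte slot
    return object_start(block, interior, 32) == block + 32 ? 0 : 1;
}
```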

Double/invalid frees: you'll need a way of detecting double and invalid frees, or these will quickly lead to heap corruption. Aborting on the first invalid free isn't a great idea, since the way you insert your custom allocator can cause the dynamic linker to allocate from its own private heap and then free those objects using your custom allocator.

Thread safety: at the very least, you need to lock the heap when satisfying an allocation. If you want good performance, you need to serve allocations for different threads from separate cache lines, or you'll end up with false sharing. Thread-segregated heaps also reduce contention, but you need a way of dealing with cross-thread frees (thread A allocates p and passes it to B, which frees it).
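
A rough sketch of one common design for cross-thread frees, with most details elided: each thread pops from its own unsynchronized freelist, and frees arriving from other threads land on a locked "remote" list that the owner drains in a single swap.

```cpp
#include <mutex>

// Sketch of thread-segregated freelists (many details elided).
struct FreeNode { FreeNode* next; };

thread_local FreeNode* local_free = nullptr;  // touched only by its owner

std::mutex remote_lock;
FreeNode* remote_free = nullptr;  // frees arriving from other threads

// Called by thread B to free an object owned by thread A's heap.
void remote_push(FreeNode* n) {
    std::lock_guard<std::mutex> g(remote_lock);
    n->next = remote_free;
    remote_free = n;
}

// Called periodically by the owning thread to reclaim remote frees.
void drain_remote() {
    FreeNode* batch;
    {
        std::lock_guard<std::mutex> g(remote_lock);
        batch = remote_free;
        remote_free = nullptr;
    }
    while (batch) {
        FreeNode* next = batch->next;
        batch->next = local_free;
        local_free = batch;
        batch = next;
    }
}

int main() {
    FreeNode n{nullptr};
    remote_push(&n);
    drain_remote();
    return local_free == &n ? 0 : 1;
}
```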

The HeapLayers library is very useful for building portable custom allocators: https://github.com/emeryberger/Heap-Layers. The library includes easily reusable components (like freelists, size classes, bitmaps, etc.) for building stable, fast allocators. HeapLayers is used to implement the Hoard memory allocator, a high-performance allocator optimized for parallel programs.

👤 ccurtsinger 🕑 11y 🔼 0 🗨️ 0

(Replying to PARENT post)

There is no "killer input" for randomized quicksort. I'm surprised libc doesn't select pivots randomly already.
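For reference, a minimal sketch of random pivot selection (Lomuto partition): no fixed input reliably triggers worst-case behavior, because the pivot choices aren't knowable in advance.

```cpp
#include <cstdio>
#include <random>
#include <utility>
#include <vector>

std::mt19937 rng{std::random_device{}()};

// Quicksort with a uniformly random pivot (Lomuto partition).
void quicksort(std::vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;
    std::uniform_int_distribution<int> pick(lo, hi);
    std::swap(a[pick(rng)], a[hi]);  // move a random pivot to the end
    int pivot = a[hi], i = lo;
    for (int j = lo; j < hi; j++)
        if (a[j] < pivot) std::swap(a[i++], a[j]);
    std::swap(a[i], a[hi]);
    quicksort(a, lo, i - 1);
    quicksort(a, i + 1, hi);
}

int main() {
    std::vector<int> a = {5, 3, 1, 4, 2};
    quicksort(a, 0, static_cast<int>(a.size()) - 1);
    for (int x : a) printf("%d ", x);  // 1 2 3 4 5
    printf("\n");
    return 0;
}
```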
👤 ccurtsinger 🕑 11y 🔼 0 🗨️ 0

(Replying to PARENT post)

It's not just a shuffled heap, it's also sparse. Section 4.1 covers heap overflow attacks, with an attacker using overflows from one object to overwrite entries in a nearby object's vtable. Because the objects could be anywhere in the sparse virtual address space, the probability of overwriting the desired object is very low (see section 6.2).

The same reasoning applies to reads. If sensitive objects are distributed throughout the sparse heap, the probability of hitting a specific sensitive object is the same as the probability of overwriting the vtable in the above attack. The probability of reading out any sensitive object depends on the number of sensitive objects and the sparsity of the heap.

There are also guard pages sprinkled throughout the sparse heap. Section 6.3.1 shows the minimum probability of a one byte overflow (read or write) hitting a guard page. This probability increases with larger objects and larger overflows. You can also increase sparsity to increase this probability, at a performance cost.

👤 ccurtsinger 🕑 11y 🔼 0 🗨️ 0

(Replying to PARENT post)

You could do better with a probabilistically secure allocator instead: http://people.cs.umass.edu/~emery/pubs/ccs03-novark.pdf

Randomized allocation makes it nearly impossible to forge pointers or locate sensitive data in the heap, and it makes reuse unpredictable.

This is strictly more powerful than ASLR, which does nothing to prevent Heartbleed. Moving the base of the heap doesn't change the relative addresses of heap objects with a deterministic allocator. A randomized allocator does change these offsets, which makes it nearly impossible to exploit a heap buffer overrun (and quite a few other heap errors).

👤 ccurtsinger 🕑 11y 🔼 0 🗨️ 0

(Replying to PARENT post)

Daniel claims that volatility is actually lower with HFT when you look at implied volatility, a predictive measure of volatility. Lewis uses actual historical volatility in his argument. You shouldn't use a predictive measure when analyzing past performance:

Implied volatility is computed based on the difference between an option's selling price and an algorithmically-determined price. That's exactly the sort of information any trader would rely on to place bets. When the calculated value is higher than the price, there is profit to be made. Any decrease in trading latency will allow traders to buy up instruments selling below their estimated value a bit quicker, driving these numbers closer together.
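
To make "algorithmically-determined price" concrete, here's a minimal sketch of backing implied volatility out of a market price, assuming the standard Black-Scholes model and simple bisection; the numbers are illustrative, not from the article.

```cpp
#include <cmath>
#include <cstdio>

// Black-Scholes price of a European call option (standard formula).
double call_price(double S, double K, double r, double T, double sigma) {
    double d1 = (std::log(S / K) + (r + 0.5 * sigma * sigma) * T)
                / (sigma * std::sqrt(T));
    double d2 = d1 - sigma * std::sqrt(T);
    auto N = [](double x) { return 0.5 * std::erfc(-x / std::sqrt(2.0)); };
    return S * N(d1) - K * std::exp(-r * T) * N(d2);
}

// Implied volatility: the sigma that makes the model price match the
// observed market price, found by bisection (price is monotonic in sigma).
double implied_vol(double market, double S, double K, double r, double T) {
    double lo = 1e-6, hi = 5.0;
    for (int i = 0; i < 100; i++) {
        double mid = 0.5 * (lo + hi);
        (call_price(S, K, r, T, mid) < market ? lo : hi) = mid;
    }
    return 0.5 * (lo + hi);
}

int main() {
    // Illustrative numbers only: spot 100, strike 100, 1% rate, 1 year.
    printf("implied vol: %.4f\n", implied_vol(10.45, 100, 100, 0.01, 1.0));
    return 0;
}
```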

👤 ccurtsinger 🕑 11y 🔼 0 🗨️ 0

(Replying to PARENT post)

Why? I see no problem with a position of "this is bad."

And since this is a discussion about a book, it's worth pointing out that the book is focused on an alternative proposal for market design.

👤 ccurtsinger 🕑 11y 🔼 0 🗨️ 0

(Replying to PARENT post)

Little's law certainly applies here. The article's author does have a good point though: if there is no queue and requests are processed one by one, your only option is good old serial program optimization.

Still, I think most large systems where latency is a major concern have some level of concurrency and/or queueing that can be manipulated to reduce latency.
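
A quick worked example of Little's law (L = λW), with made-up numbers:

```cpp
#include <cstdio>

// Little's law: L = lambda * W, so mean latency W = L / lambda.
int main() {
    double lambda = 1000.0;  // throughput: requests per second (made up)
    double L = 50.0;         // mean requests in the system (made up)
    double W = L / lambda;   // mean time each request spends in the system
    printf("mean latency: %.0f ms\n", W * 1000);  // 50 ms
    // At the same throughput, halving in-flight work halves latency.
    return 0;
}
```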

👤 ccurtsinger 🕑 11y 🔼 0 🗨️ 0