i2om3r

๐Ÿ“… Joined in 2017

๐Ÿ”ผ 40 Karma

โœ๏ธ 12 posts

12 total posts: 0 stories, 12 comments, 0 Ask HN, 0 Show HN, 0 jobs, 0 polls

(Replying to PARENT post)

> The same [1] claims in "Visual Summary" that the "cycles/byte" is 1 for various PRNGs but http://xoshiro.di.unimi.it/ seems to show that the reason splitmix64 is not preferred everywhere is that xoroshiro128+ is roughly two times faster than splitmix64 .

I have tested xoroshiro128+ vs splitmix64 in several procedural generation and simulation code bases in C and Swift. I could never confirm the numbers on http://xoshiro.di.unimi.it/. In fact, splitmix64 was slightly faster in all my tests with different optimizations enabled. I always assumed that's because its state occupies only a single register, which certainly matters in practical applications (especially in C with its restricted calling conventions). I am not absolutely sure that was always the reason, though.
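
For reference, here are minimal Swift versions of the two generators (a sketch; the constants follow the published C implementations, with the 55/14/36 rotation constants from the original 2016 xoroshiro128+). The point about register pressure is the single UInt64 of state in splitmix64 versus the two words in xoroshiro128+:

    // splitmix64: one 64-bit word of state, a few multiplies and shifts per output.
    struct SplitMix64 {
        var state: UInt64
        mutating func next() -> UInt64 {
            state &+= 0x9E3779B97F4A7C15
            var z = state
            z = (z ^ (z >> 30)) &* 0xBF58476D1CE4E5B9
            z = (z ^ (z >> 27)) &* 0x94D049BB133111EB
            return z ^ (z >> 31)
        }
    }

    // xoroshiro128+: two 64-bit words of state, rotates and xors per output.
    struct Xoroshiro128Plus {
        var s0: UInt64
        var s1: UInt64
        mutating func next() -> UInt64 {
            let result = s0 &+ s1
            let t = s1 ^ s0
            s0 = (s0 << 55 | s0 >> 9) ^ t ^ (t << 14)  // rotl(s0, 55) ^ t ^ (t << 14)
            s1 = (t << 36 | t >> 28)                    // rotl(t, 36)
            return result
        }
    }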

๐Ÿ‘คi2om3r๐Ÿ•‘7y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

You seem to be talking about a different kind of leakiness. In my mind, there are two kinds: conceptual and performance leakiness. You are talking about the latter. Pretty much any non-trivial system on modern hardware leaks performance details. From what I understand, git's UI tries to present a different model than the actual implementation but still leaks a lot of the implementation model's details.
๐Ÿ‘คi2om3r๐Ÿ•‘7y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

Which book(s) would you recommend?
๐Ÿ‘คi2om3r๐Ÿ•‘7y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

I guess what I mean is styles besides gothic (undecorated strokes of even thickness) and those that mimic traditional calligraphy. I see a lot of other forms for Latin scripts, but for Arabic, CJK and Indian scripts most typefaces fall into the above two categories. I might be wrong, though; maybe I am just not exposed to a lot of variety. I find it most notable in mixed-script printed text, e.g., Arabic, where the predominant form seems to be Naskh, which looks like calligraphy, combined with Latin, which typically doesn't look like calligraphy at all. This mix creates an image that looks very uneven to me, similar to when people use too many different typefaces in Latin text. Actually, I am not even sure whether the typographic style is dictated by Naskh or whether it's just the form of writing.
๐Ÿ‘คi2om3r๐Ÿ•‘7y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

I didn't phrase it well. I was specifically talking about the Baloo sample and how the typographic style is the same across scripts. Sorry, I am a layman, so I don't know the proper terms, but I mean the similar stroke weight and curvature, stroke endings being pointy on one side and round on the other, etc.
๐Ÿ‘คi2om3r๐Ÿ•‘7y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

I have always wondered why many non-Latin (mostly cursive) scripts show so little variation across different typefaces. Maybe I wasn't looking hard enough? Well, the article mentions a similar observation by a Sri Lankan typographer, so I guess I am not alone. Can someone maybe point me to other non-Latin typefaces that have "their own typographic style"? I found the Baloo samples (the last figure in the article) refreshing: the style of the Tamil and Devanagari samples is very close to the Latin sample. For the Mina samples (first figure in the article), I can see that they try to capture the character of the Exo Latin typeface, with certain strokes getting narrower towards the end and with its superelliptic curves (are there typographic terms for these?). I am not used to reading Bengali, but the style of that sample looks to me like it is set in a different weight.
๐Ÿ‘คi2om3r๐Ÿ•‘7y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

In addition to the higher unit count, there are a few bullet points in the article that indicate where the saved/additional time is spent. By the sound of it, most importantly: "The DE pather was modified to have a much higher max iteration count than Age1's, so longer and more complex routes can be found."
๐Ÿ‘คi2om3r๐Ÿ•‘7y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

For reference, here is the video: https://youtu.be/MtzCLd93SyU?t=19m28s
๐Ÿ‘คi2om3r๐Ÿ•‘8y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

You can explain things with more than a single picture. You can also play with pictures in your mind, and you can encourage students to do so by giving them more than one picture. Visualizing corner cases is often very useful. For triangles, you can change angles and side lengths and see which facts still hold, which don't, and how certain values or function results change.

Moreover, it very much depends on who your target group is. My brain, for example, works entirely inductively (at least in the beginning): I won't be able to develop an intuition for something if I don't start with examples, and pictures are often good examples. During my undergrad studies, my linear algebra prof was as critical of pictures as you and other commenters here. I hated it. I was never able to get an intuition for the more abstract topics until I saw concrete examples, including pictures, in later lectures and projects. Moreover, not everyone is going to be a theoretical mathematician or quantum physicist. I suspect that by not showing pictures, you usually lose more students along the way than you ruin students who (later) need a fully abstract understanding. It would be interesting to see some data on this, but I guess it's going to be difficult to collect.

๐Ÿ‘คi2om3r๐Ÿ•‘8y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

I can't say anything about JSON serialization performance as I don't have experience with it. Your other points, though, seem a little handwavy to me. Or maybe I am reading them wrong.

> A tiny, somewhat extreme example: Swift allocates local variables on the heap. (...)

Have you ever encountered a local variable that spuriously didn't get stack-promoted? I haven't. As I said elsewhere, I regularly read the generated code for my hot loops. Also, when profiling with Instruments, I have never been surprised by a heap-allocated local variable that didn't escape. I also don't see why stack promotion would theoretically be a less precise analysis than doing it the other way around. I imagine that if the optimizer fails to promote a local variable, it would be a bug in the same way it would be a bug if an escaping local variable spuriously didn't get boxed (for compilers working the other way). The difference is that it won't fail at runtime, which might increase the potential for undiscovered bugs. But again, have you ever been bitten by this?
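
As a minimal sketch of the kind of thing I mean (names made up; whether the promotion actually happens is up to the optimizer, which you can check by reading the SIL from swiftc -O -emit-sil and looking at the alloc_ref for the instance):

    // A tiny reference type used only locally.
    final class Accumulator {
        var total = 0
        func add(_ x: Int) { total += x }
    }

    // The instance never leaves this function, so the optimizer is free
    // to promote the allocation to the stack.
    func sumLocal(_ values: [Int]) -> Int {
        let acc = Accumulator()
        for v in values { acc.add(v) }
        return acc.total
    }

    // Here the instance escapes via the return value, so it has to stay
    // on the heap.
    func makeAccumulator() -> Accumulator {
        return Accumulator()
    }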

> And of course generics are implemented via witness tables, so indirect dispatch at roughly similar costs to objc_msgSend()

Generic types are opportunistically specialized and, in my experience, the optimizer has gotten a bit better in that regard. I find that a nice compromise between C++ and, say, Java. You can also influence the optimizer's decision with various not-yet-stable annotations (@_specialize, for example). Sure, if you want to write reliably fast generic code in Swift, you need to know a few things. None of the above is possible in Objective-C, though, because of its type system.
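
A minimal sketch of what I mean (the attribute is underscored and not source-stable, and the function is made up for illustration):

    // Ask the compiler to also emit a specialized entry point for Int,
    // in addition to the unspecialized generic one.
    @_specialize(where T == Int)
    func sum<T: Numeric>(_ values: [T]) -> T {
        var total: T = 0
        for v in values { total += v }
        return total
    }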

> I just saw something about blocks always causing heap allocations (and this is corroborated by an attempt someone made to port some HTTP parsing code from C to Swift. Even with max. optimizations and inline craziness, it was ~3x slower).

If by blocks you mean closures, then yes, they are heap allocated if they escape. For non-escaping closures, there is always a way to force inlining unless you pass them to compiled third-party code; cross-module optimization is an area that is still being worked on. Without knowing anything about the code in that benchmark, from your description it sounds to me like there is untapped potential for optimization, either by making the code more idiomatic and/or by using one or two annotations. Which brings me to my last point.
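
To illustrate the distinction (a sketch with made-up functions; in my experience the non-escaping case typically boils away entirely under -O, while the escaping one has to box its captured state):

    // Non-escaping closure parameter: no allocation, and with -O the
    // whole call is usually inlined away.
    func applyTwice(_ x: Int, _ f: (Int) -> Int) -> Int {
        return f(f(x))
    }

    // Escaping closure: the captured counter has to live on the heap
    // because the closure outlives the call.
    func makeCounter() -> () -> Int {
        var count = 0
        return { count += 1; return count }
    }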

> What benchmarks are you looking at?? While it wouldn't be true that I've never seen a Swift advantage, it's pretty close to never.

Do you have links? Not that I looked too thoroughly, but I have never encountered a benchmark comparing Swift with Objective-C (or other languages?) that both (1) showed significantly worse performance for Swift across the board and (2) that I trust. Most recent code I have seen that does not perform well could fairly easily be improved or would have to be rewritten in more idiomatic Swift. I specifically say most, since there certainly is still room for improvement, but in my experience it is nowhere near as bad as your comment suggests.

๐Ÿ‘คi2om3r๐Ÿ•‘8y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

Did you compare it to a roughly equivalent Objective-C program? The top four entries in your profile data show retain, release and objc_msgSend. You will have those in Objective-C too. Maybe to a different degree? That's also why I am asking whether you have similar Objective-C code to test. Or maybe it's just that unoptimized Swift code is slower and optimized code is faster?

Compile times are a related but different matter. There are -warn-long-function-bodies and, more recently, -warn-long-expression-type-checking, which are really helpful and can give you an idea of where most of the compile time is spent. In my experience, the type checker can spend a lot of time on mildly complex expressions involving overload resolution, which can be really annoying, but there are often ways around it. With those culprits eliminated, I have never encountered 5-minute build times for projects of this size or bigger, and I like to imagine that I write fairly generic code.
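
For example (a sketch; the file paths are made up, both warnings are frontend flags so they need -Xfrontend when passed through swiftc or Xcode's Other Swift Flags, and the thresholds are in milliseconds):

    swiftc -O Sources/*.swift \
      -Xfrontend -warn-long-function-bodies=200 \
      -Xfrontend -warn-long-expression-type-checking=200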

๐Ÿ‘คi2om3r๐Ÿ•‘8y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

I can't speak for other people's code, but I regularly profile and read the machine code generated by the Swift compiler, at least for my hot loops. If you know what you are doing (use the right annotations and optimizer flags), even fairly generic code often compiles down to something that comes very close to what an optimizing C compiler would generate. Sometimes the generated code is even faster, because the Swift calling convention can make better use of available registers. Sure, there are situations where it generates less optimal code, but in general, idiomatic generic Swift is on a different level than anything a (non-profiling) compiler for idiomatic Objective-C can come close to. That's my experience.
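
In case it helps, this is roughly how I look at the output (file name made up; the SIL dump is also useful for checking specialization and stack promotion before going all the way down to assembly):

    swiftc -O -emit-assembly HotLoop.swift -o HotLoop.s
    swiftc -O -emit-sil HotLoop.swift > HotLoop.sil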
๐Ÿ‘คi2om3r๐Ÿ•‘8y๐Ÿ”ผ0๐Ÿ—จ๏ธ0