jwmerrill

✨ Physicist, probability and geometry enthusiast, and software developer at https://www.desmos.com.

You can reach me at jwmerrill@gmail.com

📅 Joined in 2012

🔼 1,598 Karma

โœ๏ธ 274 posts

15 latest posts


(Replying to PARENT post)

"Passive voice" is a grammatical term.

"Amazon confirms 14,000 job losses" is not an example of the passive voice.

"14,000 workers were fired by Amazon" is an example of the passive voice.

There is not a 1:1 relationship between being vague about agency and using the passive voice.

👤jwmerrill 🕑1y 🔼0 🗨️0

(Replying to PARENT post)

The article is a four part series.
👤jwmerrill 🕑1y 🔼0 🗨️0

(Replying to PARENT post)

This is also the inverse Gudermannian function [1]. That Wikipedia page has some nice geometrical insights.

[1] https://en.m.wikipedia.org/wiki/Gudermannian_function

👤jwmerrill 🕑1y 🔼0 🗨️0

(Replying to PARENT post)

My mistake! It seems clear in hindsight…
👤jwmerrill 🕑1y 🔼0 🗨️0

(Replying to PARENT post)

The next pointer doesnโ€™t have to go first in the structure here. It can go anywhere, and you can use @fieldParentPtr to go back from a reference to the embedded node to a reference to the structure.
👤jwmerrill 🕑1y 🔼0 🗨️0
👤jwmerrill 🕑1y 🔼3 🗨️1

(Replying to PARENT post)

For problems in the plane, it's natural to pick two coordinate functions and treat other quantities as functions of these. For example, you might pick x and y, or r and ฮธ, or the distances from two different points, or...

In thermodynamics, there often isn't really one "best" choice of two coordinate functions among the many possibilities (pressure, temperature, volume, energy, entropy... these are the most common, but in principle you could use arbitrarily many others), and it's natural to switch between these coordinates even within a single problem.

Coming back to the more familiar x, y, r, and ฮธ, you can visualize these 4 coordinate functions by plotting iso-contours for each of them in the plane. Holding one of these coordinate functions constant picks out a curve (its iso-contour) through a given point. Derivatives involving the other coordinates holding that coordinate constant are ratios of changes in the other coordinates along this iso-contour.

For example, you can think of evaluating dr/dx along a curve of constant y or along a curve of constant ฮธ, and these are different.

I first really understood this way of thinking from an unpublished book chapter of Jaynes [1]. Gibbs's "Graphical Methods in the Thermodynamics of Fluids" [2] is also a very interesting discussion of different ways of representing thermodynamic processes by diagrams in the plane. His companion paper, "A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces," describes an alternative representation as a surface embedded in a larger space, and these two pictures are complementary and both very useful.

[1] https://bayes.wustl.edu/etj/thermo/stat.mech.1.pdf

[2] https://www3.nd.edu/~powers/ame.20231/gibbs1873a.pdf

👤jwmerrill 🕑2y 🔼0 🗨️0

(Replying to PARENT post)

`<!DOCTYPE html>` is not an HTML opening tag. It is a preamble.

https://html.spec.whatwg.org/multipage/syntax.html#writing

👤jwmerrill 🕑2y 🔼0 🗨️0

(Replying to PARENT post)

Instead of differentiating c^(-xn) w.r.t. x to pull down factors of n (and inconvenient logarithms of c), you can use (z d/dz) z^n = n z^n to pull down factors of n with no inconvenient logarithms. Then you can set z=1/2 at the end to get the desired summand here. This approach makes it more obvious that the answer will be rational.

This is effectively what OP does, but it is phrased there in terms of properties of the Li function, which makes it seem a little more exotic than thinking just in terms of differentiating power functions.

👤jwmerrill 🕑2y 🔼0 🗨️0

(Replying to PARENT post)

From later in [1]

> Mind that all of this does not impose how we actually scale temperature.

> How we scale temperature comes from practical applications such as thermal expansion being linear with temperature on small scales.

An absolute scale for temperature is determined (up to proportionality) by the maximal efficiency of a heat engine operating between two reservoirs: e = 1 - T2/T1.

This might seem like a practical application, but intellectually, itโ€™s an important abstraction away from the properties of any particular system to a constraint on all possible physical systems. This was an important step on the historical path to a modern conception of entropy and the second law of thermodynamics [2].

[1] https://physics.stackexchange.com/a/727798/36360

[2] https://bayes.wustl.edu/etj/articles/ccarnot.pdf

👤jwmerrill 🕑2y 🔼0 🗨️0

(Replying to PARENT post)

KaTeX supports server-side rendering to an HTML string. If you do this, the client only needs to load the CSS component of KaTeX, not the JS component.

I believe MathJax has a similar capability.

👤jwmerrill 🕑2y 🔼0 🗨️0

(Replying to PARENT post)

Entropy in Statistical Mechanics is a quantity associated with a probability distribution over states of the system. In classical mechanics, this is a probability distribution over the phase space of the system.

Two probability distributions with different entropy can both assign finite probability density to the same state, so an increase in entropy does not preclude the possibility of the system returning to its initial state.

A great deal of confusion about entropy arises from imagining it as a function of the microstate of a system (in classical mechanics, a point in phase space) when it is actually a function of a probability distribution over possible states of a system.

A further wrinkle: Liouville's Theorem [0] shows that evolution under classical mechanics is _entropy preserving_ (because the evolution preserves local phase space density, and entropy is a function of this density). An analogous result applies to quantum mechanics. However, a simple probability distribution parametrized by a few macroscopic parameters rapidly becomes very complex as it evolves in time. When we imagine the entropy of an isolated classical system increasing over time, the meaning is that if we want to model the (very complicated) evolved probability distribution with a simple probability distribution (describable in terms of a few macroscopic parameters), the simple distribution must have entropy greater than or equal to the complex evolved distribution, which is equal to the original entropy before evolution.

It's difficult to reconcile the idea that entropy is a function of a probability distribution (not a function of a system's microstate) with the idea that thermodynamic entropy is an experimentally measurable (kind of...) property of a system. Jaynes' "The Evolution of Carnot's Principle" [1] is the clearest description I've seen of the relationship between thermodynamic entropy and statistical mechanical/information theoretic entropy. Many of Jaynes' other papers [2] on this topic are also illuminating.

[0] https://en.wikipedia.org/wiki/Liouville's_theorem_(Hamiltoni...

[1] https://bayes.wustl.edu/etj/articles/ccarnot.pdf

[2] https://bayes.wustl.edu/etj/node1.html

👤jwmerrill 🕑2y 🔼0 🗨️0
👤jwmerrill 🕑3y 🔼2 🗨️0

(Replying to PARENT post)

Raku interprets decimal literals (like 0.1) as limited-precision rational numbers (Rats) [0-1].

I think this is a pretty user-friendly compromise.

[0] https://docs.raku.org/syntax/Number%20literals

[1] https://docs.raku.org/type/Rat

👤jwmerrill 🕑3y 🔼0 🗨️0

(Replying to PARENT post)

That paper discusses ways of computing a matrix exponential, rather than ways of computing polynomial zeros.
👤jwmerrill 🕑4y 🔼0 🗨️0