beetwenty

📅 Joined in 2018

🔼 133 Karma

โœ๏ธ 46 posts

15 latest posts

(Replying to PARENT post)

std::string is responsible for almost half of all allocations in the Chrome browser process (2014) [0]

The key difference is that web browsers support a highly arbitrary and mutable dataset. A game engine's assets exist in a mostly-static space. The things that are allocated at runtime are things that should have a known maximum, because going over that maximum will start to overrun latency targets. Many assets are streamed, but still hit a certain size and bandwidth budget, and so are still "basically static" - some degree of compilation and configuration at runtime always takes place for rendering features. The genuinely mutable part of game state while playing is in a comparatively confined space, and that allows a lot to be pushed to build time, where it's easier to validate and to maintain.

The features that change this picture are editors and arbitrary data imports. Web browsers are all about these two things. When you click a link, the document may load thousands of gigantic images, and I might try to copy-paste the entire contents of Wikipedia into a text box. The engineering requirements are much broader as a consequence, and there is more reason to need genuine "black box" interfaces supporting a complex protocol, as opposed to a static "calling function switches on a specified enum" approach, which is sufficient for almost every dynamic behavior encountered in game engines.
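The "known maximum" point can be made concrete with a fixed-capacity pool: the budget is decided up front, and allocation past it fails rather than growing. A minimal sketch, assuming an illustrative `Particle` type and a toy capacity of 8 (not taken from any real engine):

```go
package main

import "fmt"

// Particle stands in for per-frame game state with a known maximum count.
type Particle struct {
	X, Y  float32
	Alive bool
}

// Pool is a fixed-capacity slab sized at build/configuration time.
// There is no runtime growth, so allocation can never exceed the budget.
type Pool struct {
	items [8]Particle
	free  []int // indices of unused slots
}

func NewPool() *Pool {
	p := &Pool{}
	for i := len(p.items) - 1; i >= 0; i-- {
		p.free = append(p.free, i)
	}
	return p
}

// Alloc hands out a slot index or reports exhaustion; it never grows the slab.
func (p *Pool) Alloc() (int, bool) {
	if len(p.free) == 0 {
		return -1, false // over budget: caller must drop, recycle, or degrade
	}
	i := p.free[len(p.free)-1]
	p.free = p.free[:len(p.free)-1]
	p.items[i].Alive = true
	return i, true
}

// Release returns a slot to the free list for reuse.
func (p *Pool) Release(i int) {
	p.items[i].Alive = false
	p.free = append(p.free, i)
}

func main() {
	pool := NewPool()
	used := 0
	for {
		if _, ok := pool.Alloc(); !ok {
			break
		}
		used++
	}
	fmt.Println(used) // 8: the pool refuses to allocate past its fixed budget
}
```

The failure mode is explicit: going over the maximum is a visible, handleable event instead of a silent heap allocation that blows the latency target.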

[0] https://news.ycombinator.com/item?id=8704318

👤beetwenty🕑5y🔼0🗨️0

(Replying to PARENT post)

That's a data synchronization error across multiple related pieces of data, which isn't the same as a POD container like a hashtable corrupting itself.

The standard hammer you would apply to enforce the synchronization in all cases is relational integrity, which is too expensive for a game's runtime environment. You don't always want to synchronize everything all of the time if you want to hit a high framerate target, and a lot of performance features boil down to relaxations on when synchronization occurs. Much of the detailed design in writing a game main loop is in dealing with the many consequences of supporting that.

That's why their recommendations on errors also refer to the earlier build process and Lua integration; by the time the data hits the inner loops of the engine, there shouldn't be a case where it's invalid, because if it is, then you can't have the optimized version either.
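The "relax when synchronization occurs" idea is commonly implemented as a dirty flag reconciled once per frame, rather than relational-integrity checks on every write. A minimal sketch with illustrative names (the bounds math is a stand-in for any derived data):

```go
package main

import "fmt"

// Entity holds authoritative position plus derived bounds that must stay in
// sync with it. Enforcing that on every write would be too costly per frame,
// so writes just mark a dirty flag and the main loop reconciles once.
type Entity struct {
	X, Y       float64
	BoundsMinX float64 // derived from X
	BoundsMaxX float64 // derived from X
	dirty      bool
}

func (e *Entity) Move(dx, dy float64) {
	e.X += dx
	e.Y += dy
	e.dirty = true // defer the expensive sync instead of doing it now
}

// syncPhase runs at one known point per frame: all derived state is rebuilt
// together, and mid-frame readers must accept they may see stale bounds.
func syncPhase(ents []*Entity) {
	for _, e := range ents {
		if e.dirty {
			e.BoundsMinX = e.X - 0.5
			e.BoundsMaxX = e.X + 0.5
			e.dirty = false
		}
	}
}

func main() {
	e := &Entity{}
	e.Move(2, 0)
	e.Move(1, 0) // two writes, but still only one reconciliation
	syncPhase([]*Entity{e})
	fmt.Println(e.BoundsMinX, e.BoundsMaxX) // 2.5 3.5
}
```

The "detailed design" the comment mentions is precisely deciding where `syncPhase` sits in the loop and which systems are allowed to read before it runs.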

👤beetwenty🕑5y🔼0🗨️0

(Replying to PARENT post)

On the one hand, this is a predictable outcome if you are trying to shepherd a large codebase through a fast-moving language. Idiomatic Python 1.6 looks dramatically different to idiomatic Python 3.x.

On the other, Rust isn't the language I would want to write lots and lots of code in, either. There are a few projects and organizations where it makes sense to do so (namely web browsers, databases, and other "deep backend, large surface area" types of projects), but most of the things it does well also act as a hindrance to feature development, compared with an idiomatic Java, C#, or Go equivalent.

👤beetwenty🕑5y🔼0🗨️0

(Replying to PARENT post)

I found my way out.

The thing I had to do was find some themes that I am idealistic about and stick to those. The project is just a mode of exploring the theme, which means that each project, and my skillset, grows as needed to accommodate. The projects you are describing are completely non-thematic, just bundles of features, so of course there's no structure to them, no reason to keep going and see what's next. And you are probably not money-and-sales-motivated, which is the thing that drives a lot of obvious business ventures.

The first step in finding the theme is in "knowing thyself", of course - strengths, weaknesses, inclinations. Write and rewrite the set of things about yourself that is maximally coherent and self-reinforcing. Then drive down that road as far as you can go: What types of projects does that support? Gradually you'll hit on a common theme, and then you can really start building.

Another way to force this along is this art advice: "Draw the same thing every day." This is a rather crushing challenge to take on, for no matter the subject matter, you'll tire of it, but it quickly brings out your inclinations and therefore the themes you want to work with.

👤beetwenty🕑5y🔼0🗨️0

(Replying to PARENT post)

I'm working with Lua right now (gopher-lua) as a scripting option for real-time gaming. I've done similar things to your story in the past, trying to make Lua the host for everything, and I'm well aware of the downsides, but I have a requirement of maintaining readable, compatible source (as in PICO-8's model) - and Lua is excellent at that, as are other dynamic languages, to the point where it's hard to consider anything else unless I build and maintain the entire implementation. So my mitigation strategy is to do everything possible to keep the Lua code in the glue-code space, which means I have to add a lot of libraries.

I'm also planning to add support for tl (Teal), which should make things easier on the in-the-large engineering side of things - something dynamic languages are also pretty awful at.

👤beetwenty🕑5y🔼0🗨️0

(Replying to PARENT post)

It's basically true of anything using physics. There are very good pinball simulations, and being reliable digital games they are actually a better, more fair competitive venue, but people who really play pinball still crave the real game because of all the analog parts of it, the nuance of pressing your weight down on the table and how that changes across different games, and how you adjust your play to the specific conditions as things wear down.
👤beetwenty🕑5y🔼0🗨️0

(Replying to PARENT post)

Or, in a word, "disposability". We have a lot of systems that aren't repairable, don't get debugged, and don't have things fixed mid-flight.

And...it works, with respect to most existing challenges. Restarting and replacing is easy to scale up and produces clear interface boundaries.

One way in which it doesn't work, and which we still fail, is security. Security doesn't appear in most systems as a legible crash or a data loss or corruption, but as an intangible loss of trust, loss of identity, of privacy, of service quality. We don't know who ultimately uses the data we create, and the business response generally is, "why should you care?" The premise of so many of them, ever since we became highly connected, is to find profitable ways of ignoring and taking risks with security and to foster platforms that unilaterally determine one's identity and privileges, ensuring them a position as ultimate gatekeepers.

👤beetwenty🕑5y🔼0🗨️0

(Replying to PARENT post)

D3 does the Right Thing with respect to dataviz.

That means: it's conceptually overloaded for doing quick-and-dirty conventional charts, where you just want to plug in a few parameters and have it "just work" - but excellent once you need to customize and make it work with your specific requirements.

That it has the concepts, and a clear notion of them, is the critical difference. Most libraries, most of the time, don't add new concepts, they just have a premade black box of features and functions. Sometimes you want a premade black box, but often you want to open up the box shortly afterwards, and that creates the inevitable trend towards either remaking it as your own box, or being one of hundreds of people who gradually grow it into a monstrosity that does everything.

But a library that is concept-focused doesn't have to get that much bigger: it's just another kind of interface, like a programming language or an operating system, and that puts it on a more sustainable track.
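As an illustration of what a "concept" rather than a black box looks like, the core idea behind d3-scale - a scale is just a reusable mapping from a data domain to a visual range - fits in a few lines. This sketch borrows only the concept; the names are not d3's API, and it's in Go rather than JavaScript:

```go
package main

import "fmt"

// LinearScale captures the d3-style concept: a scale is a small, composable
// mapping from a data domain [d0, d1] to an output range [r0, r1], usable by
// any chart rather than baked into one chart type.
type LinearScale struct {
	d0, d1 float64 // data domain
	r0, r1 float64 // output range (e.g. pixels)
}

// Map linearly interpolates a domain value into the range.
func (s LinearScale) Map(v float64) float64 {
	t := (v - s.d0) / (s.d1 - s.d0)
	return s.r0 + t*(s.r1-s.r0)
}

func main() {
	// Data values 0..100 mapped onto pixel positions 0..500.
	x := LinearScale{0, 100, 0, 500}
	fmt.Println(x.Map(25)) // 125
}
```

Because the concept is exposed directly, customization means composing scales differently, not growing the library with a new chart variant per use case.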

👤beetwenty🕑5y🔼0🗨️0

(Replying to PARENT post)

A representative for the union that represents more than 19,000 academic workers across the University of California system said she was surprised by the university's decision.

"We are shocked by UC's callousness, and by the violence that so many protesters experienced as they peacefully made the case for a cost of living increase," said Kavitha Iyengar, president of UAW Local 2865, in a statement. "Instead of firing TAs who are standing up for a decent standard of living for themselves, UC must sit down at the bargaining table and negotiate a cost of living increase."

Last week, the university filed an unfair labor practice charge against the union, claiming the union has failed to stop the wildcat strike by the graduate students as it is required to do by the collective bargaining agreement.

The union responded by filing its own unfair labor practice charge, alleging the university has refused to meet with the union to negotiate a cost of living adjustment.

Direct from the article. Care to back up your statement?

👤beetwenty🕑5y🔼0🗨️0

(Replying to PARENT post)

The tradeoff in goto-vs-exception is that a goto needs an explicit label, while an exception allows the destination to be unnamed, constrained only by the callstack at the site where it's raised.

That makes exceptions fall more towards the "easy-to-write, hard-to-read" side of things; implied side-effects make your code slim in the present, treacherous as combinatorial elements increase. With error codes you pay a linear cost for every error, which implicitly discourages letting things get out of hand, but adds a hard restriction on flow. With goto, because so little is assumed, there are costs both ways: boilerplate to specify the destination, and unconstrained possibilities for flow.

Jumping backwards is the primary sin associated with goto, since it immediately makes the code's past and future behaviors interdependent. There are definitely cases where exceptions feel necessary, but I believe most uses could be replaced with a "only jump forward" restricted goto.
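A forward-only goto for error handling can be sketched in Go, which has labeled goto (the forward-only discipline here is a convention, not enforced by the language; the "resources" are placeholders):

```go
package main

import "fmt"

// process illustrates "only jump forward": every failure path jumps ahead to
// a single cleanup label, so control flow never re-enters code that already
// ran, and the destination is explicitly named, unlike an exception.
func process(input int) (result int, err error) {
	var step1, step2 bool // track what actually happened, for unwinding

	if input < 0 {
		err = fmt.Errorf("bad input")
		goto cleanup
	}
	step1 = true // pretend: acquired resource 1

	if input > 100 {
		err = fmt.Errorf("out of range")
		goto cleanup
	}
	step2 = true // pretend: acquired resource 2
	result = input * 2

cleanup:
	// Unwind in reverse order; only undo what was done.
	if step2 {
		// release resource 2 here
	}
	if step1 {
		// release resource 1 here
	}
	return result, err
}

func main() {
	r, err := process(21)
	fmt.Println(r, err) // 42 <nil>
	_, err = process(-1)
	fmt.Println(err) // bad input
}
```

The linear cost the comment describes is visible: each new failure path adds an explicit `goto cleanup`, but the reader can always see exactly where control lands.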

👤beetwenty🕑5y🔼0🗨️0

(Replying to PARENT post)

That isn't a "normative" statement about better, but a "positive" statement about a trade-off of expressive power versus rigor. A bit of mathematical intuition suggests that formally asynchronous approaches tend to be less powerful and hence more rigorous. But if your spec is still in the prototype phase, taking on a lot of expressiveness and permission with respect to your domain model is desirable because it gets you an end-to-end solution sooner.

What is good is not "on time" or "robust", but "on time and robust". Necessary and sufficient.

👤beetwenty🕑5y🔼0🗨️0

(Replying to PARENT post)

When I see articles with so many various facts assembled together, I have to remind myself that the author cannot possibly be an expert in all of them, and so the story is just that - a story.

Which doesn't change the fact that something needs to be done, or that many of the predictions will come to pass, just, the full story is never going to work out quite how anyone expects. And that's enough to have some hope.

👤beetwenty🕑5y🔼0🗨️0

(Replying to PARENT post)

This is probably only true if you're thinking of complexity in board-game terms, with characters and items and abilities that you can enumerate in a list. That's the type of thing that digital computers are pretty good at and analog systems aren't, so in our digitally-soaked culture we tend to appreciate it more and associate it with being "more complex". But it also hit a saturation point in the 90's, right around the time SMW came out, in fact - there are plenty of early-90's PC wargames and RPGs that are just baffling to play because they model the playspace in a way that makes for a spreadsheet UI.

Detailed physics and AI, on the other hand, are largely beyond the ability of the SNES platform, and they started to pick up along with 3D gaming. We don't greatly appreciate these things in video games because we can pretty easily play with blocks and balls or find live opponents in the real world, but it's a realm that still has a lot of untapped potential.

👤beetwenty🕑6y🔼0🗨️0

(Replying to PARENT post)

A lot of the facets of leadership are created from the small details of how the organization "automates" itself.

Consider meeting planning, for example. How frequent meetings are, how long they are, how many people are involved, what the meeting venue is, and how the agenda and format is set.

There are all sorts of knobs to turn just in saying "what a meeting should look like" that will impact the flow of communication, and different teams in the same organization will tend to have different meeting styles, but at a high level, the planning has to include considerations around how to allow different teams to interact effectively, since those are the bottlenecks where the information tends to get siloed.

And then you can turn to hiring, assignments, training and promotions and there's a similar kind of thing, where the same person in a slightly different role may be hugely more or less effective, and defining the problem differently changes the kinds of assignments and skills needed. Who creates those definitions, and how? It's not necessarily the manager those employees report to that's creating them.

In fact, there's a whole cascade of effects that come from the macro situation that end up translating into differently defined roles: different legal and regulatory requirements, education and training standards, minimum wages, healthcare coverage, labor organization efforts, etc. The same people in a different country may be happier and more effective.

So, while the manager is the biggest factor in the equation, it's not all on them - it can't be. Some ways of doing business and types of company culture will work in some scenarios and others will not, and the marketplace has an evolutionary tendency to just make random permutations of management style until it finds one that doesn't die, even if it creates a toxic environment. In that light, "effective" management is a highly relative thing and can be encouraged or discouraged by the broader shape of the economy.

👤beetwenty🕑6y🔼0🗨️0

(Replying to PARENT post)

The sense I get from all of it is that there's a phase shift in technological progress occurring, not dissimilar to the 1970's one where we ceased seeing the bulk of disruptive developments occurring in vehicles and energy, and started seeing them in IT fields. That initial boom produced a lot of new product categories, and then successive decades saw a lot of consolidation and transformation away from nuts and bolts and simple efficiencies towards services and IP holdings and consumer marketing. And we know the nature of that market very well today since everyone present has breathed in it for most or all of our lives at this point.

But the nature of the next market is, of course, much harder to glimpse. Like looking into pier glass, it takes effort to get a fragmented view of things. It's not as easy as Thiel's view: AI and surveillance, for example is more broadly authoritarian than it is any specific ideology. And infrastructure often leapfrogs stages in the developing world, and that can be true of a developed country like the U.S. too.

There's plenty of good work being done quietly in the current market. The word I'd pay the most attention to is sustainability, though. It's been more-or-less entirely a marketing term for decades, never delivering on the promise, but then, often it takes a while for the products to catch up to the marketing.

👤beetwenty🕑6y🔼0🗨️0