Saturday, September 30, 2023

Dating the Arrival of Modern Humans in Asia

IN 2009, WHEN OUR team first found a human skull and jawbone in Tam Pà Ling Cave in northern Laos, some were skeptical of its origin and true age.

When we published a timeline in 2012 for the arrival of modern humans in mainland Asia around 46,000 years ago based on the Tam Pà Ling evidence, the skeptics remained.

In short, the site was given a bad rap. One of the most interesting caves in mainland Southeast Asia was frequently overlooked as a possible waypoint on the accepted path of human dispersal in the region.

However, in recently published research in Nature Communications, we report more human remains found in Tam Pà Ling—and a more detailed and robust timeline for the site. This shows humans reached the region at least 68,000 years ago, and possibly as long as 86,000 years ago.

PLENTY OF EVIDENCE BUT HARD TO DATE

Our team of Laotian, French, U.S., and Australian researchers has been excavating at Tam Pà Ling for many years. Find a detailed, interactive 3D scan of the site here.

As we dug, we found more and more evidence of Homo sapiens at earlier and earlier times.

First there was a finger bone, then roughly 2.5 meters deeper, a chin bone, then part of a rib. In total, eight pieces were found in only 4.5 meters of sediment, which may not sound like a lot, but this is huge in archaeological terms.

Surely, we thought, this would be enough for Tam Pà Ling to take its place among the early human arrival sites in Southeast Asia.

But a hurdle remained: the cave is hard to date. This has prevented its significance from being recognized, and without a convincing timeline, the cave’s evidence would not be included in the debate over early human movements.



from Hacker News https://ift.tt/p84Rr2c

America does not have a good food culture

Last month an off-the-cuff drunk tweet by a French guy, Philippe, living in America erupted into an online firestorm, starting weeks of dunks, arguments, and hot takes that are still going on.

The responses fell roughly into two camps.

The first, smaller, and supportive camp was, not surprisingly, other French people, who didn’t have much to say beyond “of course!” because what Philippe said was so obvious to them. I mean, duh, everyone knows that old joke,

“Heaven is where the cooks are French, the police are British, the mechanics are German, the lovers are Italian and everything is organized by the Swiss.

Hell is where the cooks are British, the police are German, the mechanics are French, the lovers are Swiss, and everything is organized by the Italians.”

It doesn’t matter that America is not part of Europe, because to Europeans America is worse at everything (except war), especially food.

The larger, more vocal camp was Americans, mostly journalists, some angry, some amused, but all with a lot to say, mostly in the form of quote-tweet dunks. Things like, “Yeah. Sure. Try getting a taco in Paris,” or “French food is great, if you only like baguettes and frog legs, but if you want Thai, Mexican, Japanese, and Cuban then F off.”

Then there were the funny tweets, of which the best IMO was,

My immediate take, which I tweeted against my better judgement, fell mostly into the French camp, although I tried to add a little nuance.

Otherwise I sat back and watched the fight with a smile, because that’s what I do with most Twitter fights. But this one kept nagging at me, because as it unfolded, and then somehow kept unfolding, I realized it was about two different cultures talking past each other, not at a thin (who has better food?) descriptive level, but at a thick level — as in, what does food, and eating, mean to us?

Given how much I believe cultural differences matter, I came to see it as two groups effectively debating in different languages, both convinced they were right, because in their own way, both were.

The pro-US arguments boiled down almost entirely to pointing out that we have a huge diversity of restaurant choices. Which is a good argument, because that, at a descriptive and practical level, is the strength of US food culture

— that you can, if you really want to, get a reasonably priced good meal of almost any cuisine in the world.

That argument reveals a lot about how Americans, or at least well-to-do Americans on twitter, think about “good food.” They equate it with a diverse restaurant scene, which assumes having a bunch of different options trumps ubiquitous fresh food.

It is also a rather narrow definition, because having access to a lot of different restaurants serving a lot of different cuisines is currently only available in a handful of large and mid-sized cities.

Or, to put it another way, “eating well,” as defined in the US, is primarily a niche experiential thing (“let’s get Burmese, I’ve never had that before!”), mostly confined to the well-to-do and intellectuals, and isn’t central to broader US culture. Not yet at least.

The reality of food in America, outside of a few high-status neighborhoods scattered around the US, is that most people don’t prioritize the varied experiences of eating at a diversity of bespoke restaurants, and so the median food eaten in the US is not from some well reviewed Indonesian place on the Upper East Side, or from that really cool Bolivian place in Alexandria.

It’s far more mundane than that, far more processed, and far less social. The far more common reality of food experience in America is someone eating drive-through alone in their car, or eating wings at the Applebee’s bar while watching the game with friends, or heating up leftovers in a microwave before work.

It’s not all a hellscape of lonely meals of processed food, but relative to the rest of the world, it is.

I hesitated to write this piece because strictly speaking I can’t outright compare US and French food cultures, not honestly. I’ve only spent about two months in France, and that was in Paris over a decade ago. I’ve yet to walk France, although I will soon.

Still, I’ve spent enough time in places like Germany, Japan, Romania, Vietnam, Turkey, and Peru, to understand how different we are when it comes to food, and to see how little it matters to us, culturally speaking, compared to almost everyone else. Including surely the French.

Food in the US isn’t central to our identities, not yet at least. It is still largely a utilitarian and transactional thing — something necessary to get enough calories, hopefully tasty ones, to stop being hungry and keep working. That’s very different from most of the world, which views meals as a social occasion about far more than getting fed. At the very best, meals aim to be something transcendent, as if almost every day were a small Thanksgiving, at least in attitude, if not in the amount of food served.

For the data-heads, you can see the footprint of that in how much time per day people in different countries spend eating.

If you’ve lived outside of the US you don’t need data to tell you Americans eat quickly, and mechanically. We gulp our food down like starving beasts intent on moving on to the next thing, while everyone else takes their time, savoring the experience.

In Korea, tables rarely turn over. If I don’t get to my favorite izakaya in Seoul by 6:30, I miss getting in, since groups will sit from six till closing, eating small dish after small dish, laughing, drinking, and generally being social.

That is even true of poorer places like Vietnam, where eating is an integral part of the culture, even though a lot of the food tracks as fast food.

The large outdoor bars and restaurants start filling up around five, and there isn’t really a lot of turnover. Tables will lose a few people now and then, but others will quickly join, and the large sprawling party ebbs and flows but almost never dies down until six hours later at closing time.

That’s just what you see out on the street. If you’re lucky enough to be invited into a family home, you see the same thing, but only with more food, longer hours, and more shots of liquor. “To your health! To my health! To Buddha’s health! To the cat’s health!”

The result of all this time spent eating together is that people expect the level of the cuisine to rise to the occasion.

That is the second major difference between US food culture and the rest of the world: Farm to table isn’t just some marketing ploy for the wealthy, it’s what everyone expects, and it’s often what’s delivered. Sometimes in shocking fashion.

In a bar in Vietnam, with an attached grill, I watched a guy pull up on a moped piled high with bamboo cages, extract five screaming hens by their feet, carry them across the patio, slap them down on the cold cement floor still squawking, and proceed to cut off their heads, before handing them to the cook yards away from me, who tossed them to two women sitting on their haunches, who plucked and gutted them. That was that evening’s special.

In Lima my daily lunch of seaweed soup and ceviche was made by three women who got their fish from a fish seller across the aisle who got it from boats that came into the docks that morning. You could walk the entire path of all the ingredients in the meal, from ocean to table, in under an hour.

And that’s in what’s considered a lower-class Lima neighborhood, not some fancy upscale part of town.

It is the same in Turkey, Germany, Jordan, and Romania, where the idea of eating anything but freshly made bread in the morning is considered a joke, something to throw out in the trash.

While you can get freshly made bread in almost any mid-sized city in the US, that isn’t what most Americans eat for breakfast. At all. What is far more common is a microwaved, pre-packaged sandwich shipped a few days earlier from a huge conveyor-belt bakery a hundred miles away.

Of course, all this is a broad generalization. The US is huge with lots of variation, and a lot of subcultures, including lots of food subcultures.

Including a very strong barbecue scene, especially in the rural south.

That was the other amusing thing about the uproar over Philippe’s tweet: Almost nobody defending the US mentioned barbecue, which is odd because it’s arguably our strongest contribution to the global food scene, and one not limited to the wealthy — which is maybe why few mentioned it.

Yet if any US food scene best captures how the rest of the world, and France, understands food, it’s barbecue, where the joy of cooking is so deep, so about the social, that people do it not just for money, but for love.

Where a guy with zero credentials will spend twenty years perfecting a meal in the back of an old gas station, to impress his friends, family, and maybe, just maybe, make a few bucks out of it.

It also illustrates the difference between thin and thick culture, or the difference between doing something because you kinda like it, or because it’s a job, versus doing something because it’s central to your identity.

What does it mean to be central to your identity? I could drag out long quotes from Aristotle, Kierkegaard, Geertz, but the simpler explanation is it’s something you could put on your tombstone. It’s how you see yourself, and how you want the rest of the world to see you and remember you.

Now does barbecue really rise to that level? Would someone really put, “Made A Mean Brisket” on their tombstone? Would someone really insist, while on their deathbed, that they have a funeral repast cooked up by ten friends who toiled over hot charcoals for the prior two days?

If you live in the South, you know yes, that’s a thing, or at least something that could easily happen. If you don’t live in the South, watch a few episodes of “Man Fire Food” and you will get it. Barbecue is a life, not a hobby.

That, a life not a hobby, a craft not a job, a way of life not a means to calories, is how the French (and almost all the rest of the world) view food. It is the attitude that Philippe most likely grew up with and understands, and doesn’t see in the US.

Which explains his tweet, and why he thought it was so self-evident. Food is life, not a hobby. Not a bunch of experiences to collect so you can say, “look at all the crazy food I ate!”, but a bunch of experiences, like all other experiences, that add to a full and complete life.

So to the haters of his tweet, my response is, sorry, Philippe is right, at least in the broad sense. But don’t get mad, don’t go hating on the French; instead go to Memphis, or rural Georgia, get some barbecue, and celebrate what can be great about American food.

Then, if you’re still feeling angry, or if you’re drunk, tweet him a picture of your huge meal of ribs and slaw, and ask him if he can get that in Paris.


On Monday I leave for Asia for all of October, with most of my time in Mongolia. If Mongolia is anything like Bishkek, and I think it is at least when it comes to food, then I will have an example of my above thesis: a place where the food culture is important at a social level, even though the food isn’t good. At least not to my tastes. Some of that is the whole fresh ingredients issue, some of that is the result of a nomadic culture, and some of that is just bad taste.

I’ll also be spending a few days in Korea and then try to finish my walk across Japan. Until then, see you next week!




from Hacker News https://ift.tt/mgVMJRw

The Missing Middle in Game Development

What is the Missing Middle?

Other than the fact that he is my total hero and was born and raised in my hometown, I wanted to interview John Romero to try to solve the modern indie game development paradox.

The paradox goes like this…

When John Romero was making games in the 1990s there was no digital distribution, so you had to get a company like Softdisk to publish you and physically manufacture and distribute your games by mail! There was no Unity, GameMaker, Godot, or Unreal, so if you wanted to make a game you had to have a full-time person (in their case John Carmack) crafting your game engine from scratch (usually in C or assembly) every time. There was no widely accessible internet or cloud-based source control or storage, so you couldn’t work remotely.

Yet, despite all the limitations of the 1990s, the guys at id Software could make 13 games in 1 year.

How is it that we have better tools which should make us more productive, and better communications that should increase collaboration, and digital distribution which has, essentially, 0 marginal cost, and yet games take longer to make? Indie game developers were supposed to be small, nimble, and quicker than the hulking, slow moving AAA goliaths. What happened?

IMPORTANT: this is not a lazy devs argument! Everyone is working their ass off these days. My concern is that when I talk to first time game devs they almost always tell me they are 2 years deep into their 4 year project. That scares me!

In today’s post I want to explore why new indie game developers plunge themselves into much longer game development cycles. The industry has essentially pushed us to cut out one of the most important stepping stones between tiny games and multi-year projects. This has left developers with incorrect assumptions, wasted resources, and burnout.

(Also I wanted an excuse to write a post that included as many pictures of the Doom guys wearing jorts as possible)

From John Romero’s SmugMug account: “Both Shawn Green and myself are big heavy metal maniacs!”

My theory: The missing middle

The modern indie game business model and development process has a missing middle problem. 

For the last decade or so, the business model of first time indie studios goes something like this: 

“We are a team of 3 developers and did a game jam, published it on itch.io, had fun, people seemed excited for our game, so we formed an LLC, signed with a publisher, and are planning to spend 3 years making our first game for which our goals are modest: we really only want to make $300,000.”

That mentality right there is the missing middle problem: These days, studios either make jam games that they hammer out in a weekend that they post to itch for free or they burn the ships, quit their job, and make multi-year mega projects that can only be profitable if they earn multiple hundred thousands of dollars. 

What is a middle game?

These are games that are bigger and more polished than a game jam game but are not huge, 30-hour, epic triple-I indie games. A “middle game” should only take 1 to 9 months to create and can be profitable (or at least not a money sink) because it is expected to earn in the range of $10,000 to $40,000.

“Middle games” used to be the norm. The middle used to be the goal. The middle was how you built a career.

Unfortunately, when games made by small teams started earning millions and people could buy as many jorts and race cars as they wanted, everyone started expecting $300,000 payouts to be the baseline rather than the exception.

But now, to modern indie devs (especially the ones hanging out on r/gamedev), earning $40,000 or less is considered a complete failure that should be mocked publicly (this is them speaking, not me).

To modern indie devs, dedicating just 3 months (or less) to your first project is shocking, and when I suggest it they react as if I am asking someone to perform a backflip to the moon or to change their blood type using transcendental meditation.

I believe the modern indie business model that skips over developing “middle games” is why 75% of studios on Steam have released 1 and only 1 game.

Why did this happen?

I did not have a chance to ask John whether this is a picture of Noel Stephens at Ion Storm laughing at John because he broke a table or because he has more denim below the waist. (Also Jeans & sandals was a thing we did in the 90s)

Indies stopped developing “middle” games because the industry stopped directly paying for them.

Quick history lesson:

In John Romero’s book Doom Guy, the first time he was paid for making a game was for an Apple II game called Scout Search, which he published in inCider Magazine. The magazine paid him $100 for it ($300 in 2023 dollars).

Here is a picture of the magazine:

(side note, get it? It is an Apple computer magazine called inCIDER? Get it?) 

Here is a picture of the game:

For early indie software distribution, magazines would literally print source code, and people would retype that code into their computers to play the game. Obviously the games had to be tiny enough to be printed in a magazine and manually retyped by a human. The payout was in the hundreds of dollars. But that was the expectation at the time.

Softdisk, the company John worked for before starting id Software, would pay developers a flat fee for the rights to ship their game on its monthly software disk. Softdisk negotiated a 6-game deal with John’s id Software for $5000 per game (about $11,000 per game in 2023 dollars). That was the expectation for games at that time.

In the days of Flash games, Flash portals like Kongregate, Armor Games, and Newgrounds paid indie game developers a flat fee to have their logo embedded in the game. For a decent game, Flash portals paid indies in the range of $10,000 to $30,000. That was the expectation for games at that time.

Back in the day, there was an income ceiling to indie game development. Publishers paid a small flat fee per game. Nobody was getting rich off one game, so developers were incentivized to make lots of games that were small, quick, and fun.

Back in the day, making a game and earning $15,000 from it was totally expected. Nobody said “FAILED GAME!” Nobody called it the indiepocalypse. That was just what games earned.

Here is a quote from a Flash game developer who used these smaller games to build a career and eventually became a hit Steam developer with games like Coffee Talk:

I would make the art and a friend would do the code. We could make a simple game in a week and that could sell for $500 in sponsorships. We made the first Infectonator game in a month and that game got $5000 in sponsorship + $20K in performance bonuses. Some sites would offer performance bonuses where they would reward based on how many times the game was played, more plays means more ads traffic for the sponsor. It’s basically revenue sharing from the ads that the game generated in the sponsor’s site. We learned a lot about game design from those Flash days

Kris Antoni (Toge Productions)

You can still play Infectonator here.

Earning ONLY $25,000 for a game sounds terrible now, but when your game took a week to make, that was great money. “Middle games” were a great way to learn game development and build a career.

“Middle-earning” games are still the norm; people just don’t realize it

Gamalytic recently published a blog post that did some filtering to see what the “true” median income for Steam is. They filtered out cheap asset flip games that cost less than $5 and found that the median revenue is about $4000. 

They also filtered out games that cost less than $10 and found the median revenue is $17,000.

$17,000 in today’s dollars is basically what Softdisk was paying id Software per game back in 1990.

And actually, the number of games that have earned $10,000 in the first 3 months is growing!

Graph from Gamalytic

So why is it considered so bad now for a game to earn $10,000, when 30 years ago that was the norm? Because now there is a slim chance of becoming super rich, thanks to the indie-utopia.

The birth of the Indie-utopia

Everyone complains about the indie-pocalypse but not enough people consider that we just lived through an indie-utopia in the form of digital distribution and mega-earning games.

In 2004, digital distribution arrived on consoles with the original Xbox’s Xbox Live Arcade. Games like Geometry Wars were HUGE and earned millions.

For PCs, digital distribution always sort of existed but it kicked into high gear in 2005 when Steam opened up to 3rd party games. The first non-Valve game to appear on Steam was Ragdoll Kung Fu which you can still purchase today (it only has 53 reviews).

With the birth of digital distribution, suddenly a game produced by indies COULD earn millions of dollars in sales (emphasis on “could”) instead of just earning $10,000 paid up front by some platform like Kongregate. The documentary Indie Game: The Movie almost seems like Xbox Live propaganda telling developers that YOU WILL make millions of dollars.

Then came the iPhone in 2007, the App Store in 2008, and the Free To Play model, and the Flash game portals were dead. And without Flash, nobody was paying developers a flat rate to make smaller games. Free To Play and ad-supported games could also make MILLIONS OF DOLLARS. When Flappy Bird hit in 2014, everyone was whispering about how the developer was earning $50,000 per day in ad revenue. Numbers like that make the $10,000+ ceiling that Flash portals paid out seem laughable.

What developer would want to settle for $10,000 when they could earn 100 times that?

AAA became huge

At the same time that indies were getting million dollar payouts, AAA games flourished too. Games like Call of Duty, Bioshock, Grand Theft Auto, Skyrim, The Last of Us, and Dark Souls became mainstream. Games became more profitable than movies. 

People stopped going to arcades to play silly little action games. Instead, people could play beautiful AAA games at home on their consoles and PCs. 

Simultaneously, the Free to Play model changed game design so that retention became the most important metric. Free to Play took quick-hit arcade-style games like Bejeweled or Frogger, bolted on massive retention systems in the form of upgrades, powerups, and multi-currency schemes, and birthed complex chimeras like Candy Crush and Subway Surfers.

So quick hit arcade games (like John Romero published for inCider magazine) were no longer in vogue. 

Since the Xbox and PS2 era began in 2001, AAA games have become so amazing, so beautiful, and so ubiquitous that an entire generation of gamers has grown up assuming that a game = a 10-15 hour linear, cinematic experience with amazing graphics.

Those young gamers who later became developers never had the chance to play silly little games that are worth about $10,000.

They never had a chance to experience what a “middle game” is. 

Platforms, publishers, and investment firms grew up too

When the rest of the industry started earning million dollar payouts, the 3rd-party businesses that funded the middle market grew up too. 

I keep my ear to the ground, and most major indie game publishers and platforms (like Xbox Game Pass) won’t even consider funding games for less than $200,000. In fact, I have heard about indies pitching the big publishers $250,000-budget games and the publishers coming back saying “Increase your budget and how much you are asking for and MAYBE we will publish your game.” So developers scaled their games up, doubled how much they were asking for, and then they got funding.

Publishers want to see big payouts and they know that only comes from big budget games. 

The original game that the indie developer pitched was totally fine before the publisher asked them to bloat the budget. The game would probably have been even better if they had just left it at its original size.

Publishers don’t have time to fund and support a game that is expected to earn $17,000 in revenue. If the publisher negotiated a 30% revenue split, that means that after Steam’s cut, taxes, etc., they would be earning less than $3,500. That just doesn’t scale for publishers, so middle-tier games are worthless to them.

The exception became the rule

When games had the possibility of earning developers millions, every developer mentally EXPECTED games to do that well. Our expectations became outsized and distorted.

I am guilty of distorting this expectation too. I mostly write about games that are top sellers, such as Dome Keeper, Peglin, and Zero Sievert. To be fair, I do sometimes cover games that are medium hits, such as Ravenous Devils, and other more modestly selling games.

We forgot how to make small games

The burning of the Library at Alexandria

Basically when developers MIGHT earn millions of dollars from an indie game, everyone gets that number in their head as the EXPECTATION for releasing a game. Every game that doesn’t earn $1,000,000 must be a failure!

If you step back, the expectations that contemporary developers have for themselves are really quite insane:

Step 1: Try out a concept with a game jam game that you made in 2-5 days.

Step 2: Dedicate 3+ YEARS of your life to create a flawless masterpiece that will earn you $1,000,000. 

If you mess up Step 2, you wasted 3 years of your life and FAAAAAAAILED

This is crazy.

 (BTW Why does everyone on r/gamedev use the term FAILED so readily? Stop it).

What we can learn from our forerunners and the birth of id Software

My favorite sci-fi trope is when scientists uncover an ancient alien civilization that was super advanced but somehow managed to implode, and the scientists dig up its long-lost machinery so that we can learn from its genius (see the Alien franchise film Prometheus, 2001: A Space Odyssey, Halo, etc.).

We need to go back. 

Basically we need to dust off the id team’s jorts and figure out how to make smaller games on faster timelines (John Romero isn’t even that old, so it shouldn’t be that hard to get back at it).

DOUBLE JORTS!

Here are some lessons I think we can learn from our game dev forerunners:

Set expectations

Build your game assuming you will earn $10,000, NOT $1,000,000. How long should you spend building it with that goal? How much should you spend on art? What creative compromises will you make?

Limits create great art.

Expect to release multiple games per year

Yes, it can be done. You don’t have to release 13 like id Software did in 1991 but you can release more than 1 game per year. 

Be aware of the market

I know it is uncool to make market-based-decisions that compromise YOUR TRUE ARTISTIC VISION (man) but successful developers have always done that. 

John Romero worked for Origin Systems and had experience making huge RPGs. He also loved deep RPGs like Chrono Trigger and Ultima. The guys at id Software played a lot of Dungeons and Dragons (John Carmack was the DM). But when John Romero co-founded id Software, he knew his small team couldn’t create deep, story-rich RPGs, so they purposely chose not to.

Similarly, despite early success with 2D platformers like Keen and Dangerous Dave, by 1991 Romero knew that the industry was transitioning to 3D games. So id Software changed their strategy to keep up with the times.

“Guys, we should not be doing this game. We have this cool technology that we’re not using, that we’re not taking advantage of to make a better version of that game going forward.” 

John Romero to his id Software team, trying to convince them to give up 2D platformers and transition to 3D games

id Software could have spent YEARS heads down working on their magnum-opus game DOOM with the most advanced tech. But they didn’t have the runway, or the capital, or the expertise, or the audience to do that yet. Instead, they released several smaller games, each one built on the same engine but with a slight improvement over the previous one. This allowed them to earn money while they researched better tech.

Spread research and development across several games 

Here is a brief history of games made by the id Software team that demonstrates how they used multiple releases to incrementally upgrade their research.

Catacomb (Play it here in browser)

Released 1989. This was John Carmack’s early 2D game for Softdisk. Here the basic gameplay is established: dungeon crawling (à la Gauntlet) and attacking enemies with a weapon you charge up by holding down the fire button. There are also keys, potions, spells, etc.

HoverTank 3D (Play it here in browser)

Released 1991, this is the first real 3D engine id Software created. Notice that HoverTank’s walls are all solid colors? At the time they didn’t know you could scale bitmaps to create textured walls. But who needs textures!?! Also the controls kinda suck and don’t feel responsive. But the basic UI is innovative for the time: it includes a radar that shows you where the enemies are (which is still used to this day in FPSs)

Catacomb 3-D (Play it here in browser)

Released in 1991. This game builds off the HoverTank engine and adds textured walls (look at that cool brick pattern). The game also has improved controls, multiple weapons, keys, healing potions, and a new compass which is still used in modern FPSs and games like Skyrim. 

But notice that Catacomb 3-D still has the same basic game design as the 2D Catacomb. It is still dungeon crawling; there are Bolts, Nukes, Heals, and Keys, and it is score-based just like the 2D version.

Also notice how the team used basically the same red brute sprites in HoverTank and Catacomb 3-D; they just gave them yellow eyes and toenails.

Wolfenstein 3D (Play in browser here)

Released 1992. This is built on the same 3D engine as the previous games but now has VGA graphics instead of the smaller color palette of the EGA graphics that Catacomb 3-D used. 

Wolfenstein also added other gameplay mechanics such as secret walls, better weapons, and the ability to add sound triggers to the map that can alert remote guards when you shoot. 

Doom (Play it in Browser here)

Released in 1993. This game truly revolutionized the industry. It was built on an entirely new engine that incorporated everything John Carmack learned making HoverTank, Catacomb 3-D, and Wolfenstein 3D. The game finally featured diminished lighting, sky textures, variable-height floors, non-orthogonal walls, and even a head bob when you run.

Basically DOOM was not a game that came from thin air. It was built slowly over years through smaller releases, feedback from the marketplace, and constant iteration. 

If, back then, id Software had used the modern indie all-or-nothing method of production, they would have gone silent for 3 years: no market feedback, no funded R&D on their engine, no practice in 3D level design, no audience building. DOOM would have been designed in the dark and might never have been discovered, or even made it to release.

Reuse tech infrastructure

Throughout the development of their dozens of games, John Romero created the Tile Editor, aka TED, which was id Software’s in-house editor for creating levels for many of their games. The editor was flexible enough that it could be reused over and over despite each game having a slightly different engine. Over the years the tool created levels for everything from their 2D platformer Commander Keen to simple 3D games such as Wolfenstein 3D. John Romero commented in this tweet that over 33 games were made using his TED v5 tool.

TED was even used by other companies to make levels for their games. Here is a screenshot of TED being used to make levels for Apogee’s Rise of the Triad.

The great Jorts vs Hammer-pants debates. The daily dilemma everyone in the 90s faced.

Learn what a small game is

Whenever I give a talk and say more indies should make a game in 6 months, people’s heads explode. They leave nasty comments on the YouTube video. IT’S IMPOSSIBLE!

AAA games have warped our brains as to what the scale of a game is. Blockbuster indie games like Hollow Knight and Factorio have warped our brains as to what the scale of a game is. You cannot make those games in the short runway needed for “middle” games. 

We don’t even remember what small fun games feel like anymore. So I am going to show you REAL WORLD examples of actual indies who have actually made good quality, moderately profitable, non-asset-flip, games. 

Your homework assignment is to download these. You should play them. We need to relearn what “middle games” feel like.

Now you might look at these games and say, “Chris, this game FAAAAAILED, it only earned 123 reviews!” Remember, we are living in the land of “Middle Games.” If you only spend 3 months on a game, 123 reviews can mean the game is quite profitable per hour of dev time.

Real world, middle sized games

Cozy Bee

I am such a fanboy of Cozy Bee games. Most of the games by this 1-person team are developed in less than a year. Here are a few of them. They are in the right genre for Steam (management), they have the right graphical style (cozy), they are just so smart. 

Capybara Spa

Lemon Cake

Bunny Park

20 Minutes Till Dawn

I wrote about 20 Minutes Till Dawn in my blog post here. Ignore the fact that it made millions of dollars; the interesting thing is that it was initially released after fewer than 3 months of development time. 20 MTD was a “middle” break from a game that the developer had been working on for YEARS. Middle games to the rescue to stop burnout.

Sokpop

The Sokpop collective releases a game a month. Behind the scenes they are really four developers who each work alone, typically releasing their games on a staggered 3-month dev cycle. They have their own way of doing things, and I would say they go a bit too crazy switching up genres rather than doubling down on proven ones, which is why you see such variation in their sales numbers. But who cares? They seem to be having fun and they are still releasing interesting art, so good on them.

If you look at their titles, you will notice that their most successful games are the ones in Steam’s preferred genre: Crafty-buildy-strategy games. Those are

Pyramida

Stacklands

Tuin

Guardener

Luckitown

The less popular ones are typically platformers, linear narrative games, and super experimental avant garde genres.

Luck Be A Landlord

This anti-capitalist slot machine roguelike was a hit! But it didn’t have to be, thanks to its short production time.

I asked developer DiIorio about the development process:

The first public build of Luck be a Landlord took about 2 months from its initial concept to the demo release in the Steam Game Festival: Autumn Edition.

I originally was working on it for this game jam: https://ift.tt/9lNgBcM before I expanded it into a bigger project.

Dan DiIorio

You can see his original tweet here:

SNKRX

SNKRX was developed in a matter of months, and you can actually follow along with the developer’s process in this day-by-day dev log, with more game marketing and game-making thoughts on their main blog.

SNKRX and the Vampire Survivors-likes are a great example of one way to make quick games that Steam actually pays attention to: take a classic old arcade or Flash game and add roguelike meta progression on top of it.

City builders

City builders and civilization-style games are the biggest genres on Steam, but developers always say, “Well, those take YEARS to make.” They don’t have to take long if you focus on finding where the fun actually is and stripping away everything else. A perfect example of this is Islanders, a minimalistic city builder that took the team only seven months to develop.

Horror games

Horror games are a very, very popular genre with gamers on Steam, and this developer made Backrooms: Apprehension and Don’t Look Away in 1.5 months each.

Here is a quote from them on this reddit thread.

“I’ve made two games so far (DON’T LOOK AWAY and BACKROOMS:APPREHENSION), they took me about one month and a half each, still updating them but I did release them in early access and they are doing pretty good, not millions but that wasn’t my goal, I just needed to make enough to quit my job and to allow me to keep making games and I’ve succeeded.“

DMNT Interactive

But Chris! You wrote earlier that Steam hates small games!

That is correct! Two years ago I said that the Steam algorithm is built to hide smaller games (see, even Steam is built to suppress the “missing middle”).

It is true that you cannot rapidly release many small games and expect to make millions. But that is ok. Remember, our brains have been AAA and III warped. Indie Utopia warped our brains.

The whole point of middle games is to improve your tech, get your name out there a bit more, build a back catalog, practice launching, and just have some lower-stakes fun. If the game that took 3 months to build does fail, don’t worry: you only wasted 3 months. If you do meet the target of $10,000 in revenue, that is awesome! If not, move on and make another one.

You could go to game design school and pay them TENS OF THOUSANDS of dollars per semester to learn how to make games, or you could just release a bunch of small “middle games” and EARN tens of thousands of dollars. When John Romero and id Software released HoverTank and Catacomb 3-D, they didn’t say “our games failed, we only made $5000!” Instead, they saw them as paid R&D that built toward Wolfenstein 3D, which was earning id Software $50,000 per month a year later.

Podcast

Bonus: the Designer Notes podcast. While I was writing this, Soren Johnson published this great interview with Trent Kusters about his early days working at a company that churned out games, and how much he learned from that experience.

Listen to it here

Failure is ok

Even if the game does “fail,” you shouldn’t be ashamed. Watch this great segment from an interview with fine art critic David Sylvester, in which he says artists “must be allowed to go through bad periods, they must be allowed to do bad work.”

Don’t worry, the rapid, quick, middle game period is just the middle. You will someday make your big grand masterpiece, you just can’t do it right away. 

Stick with it and you can make your masterwork some day.

Final Inspiration 

I want to leave you with one final picture of John Romero in jorts and long sleeve denim (long on the top, short on the bottom) and the most inspirational part of his book:

Start small and get some practice. Try remaking a classic arcade game to get a feel for things before you attempt to make a giant RPG. Carmack, Tom, and I had each been making games for ten years before we started working together. It was another few years before we made DOOM, and that was my ninetieth game (not a typo). Give yourself time to get good, and don’t be dissuaded by setbacks. Game development is like gameplay. Load your save and try again.

John Romero, Doom Guy


from Hacker News https://ift.tt/huO0HAx

A couple of messages about changes to ianVisits: Copyright trolls

Hello,

I’ve been contacted a few times recently about the sharp decline in descriptive images on the events listings. A picture really helps to show what an event will be like, and people are asking why there’s been a decline in them — however, a problem has arisen over the past year.

Copyright trolls.

I’ve been walloped recently by a cluster of start-up companies that scan websites looking for photo infringements.

If I’ve made the mistake myself, then I put my hands up, guilty m’lord.

However, the vast majority of my “fines” have been because an event venue has used or supplied an image to me, and they either didn’t have permission themselves, or the license they paid for didn’t allow them to use the photo to promote their event on other websites.

As the publisher, I am held to be liable, even when the photo supplier made the mistake.

Legally, I could go back to the venue and ask them to refund the cost, often circa £400 per incident, but most are charities, so I have been sucking up the cost; it just feels wrong to expect small charities to cover costs they have unwittingly dumped on me and can ill afford.

However, with bills that have reached the thousands over the past few months alone, the only solution is not to use images on events unless I am convinced the venue has a license to use them. Expecting a small organisation to sign consent forms and the like for every single event listing, often where the marketing is done by a part-time person working a couple of days a week… is just not viable.

So, sorry, but I have to reduce the number of header images on events, even though I know how helpful they are.

The events will keep being listed, so you can keep finding wonderful things to do in London, just with fewer photos.

The risks are just too great.

Help support the ianVisits website

Also, for a bit over a decade, ianVisits has been providing news and listings about what’s happening in London.

While advertising revenue contributes to funding the website’s costs, online advertising is a rapidly shrinking option for websites to rely on. That is why I have a facility with DonorBox where you can support the costs of running the website and the time invested in writing and researching news articles.

Whether it’s a one-off donation or a regular gift, every bit of support goes a long way to cover the costs of running the ianVisits website and keeping you regularly topped up with doses of Londony news and facts.

If you like what ianVisits does, then please support the website here.

Thank you




from Hacker News https://ift.tt/BV9G2wc

Cloudflare launches new AI tools to help customers deploy and run models

Looking to cash in on the AI craze, Cloudflare, the cloud services provider, is launching a new collection of products and apps aimed at helping customers build, deploy and run AI models at the network edge.

One of the new offerings, Workers AI, lets customers access physically nearby GPUs hosted by Cloudflare partners to run AI models on a pay-as-you-go basis. Another, Vectorize, provides a vector database to store vector embeddings — mathematical representations of data — generated by models from Workers AI. A third, AI Gateway, is designed to provide metrics to enable customers to better manage the costs of running AI apps.

According to Cloudflare CEO Matthew Prince, the launch of the new AI-focused product suite was motivated by a strong desire from Cloudflare customers for a simpler, easier-to-use AI management solution — one with a focus on cost savings.

“The offerings already on the market are still very complicated — they require stitching together lots of new vendors, and it gets expensive fast,” Prince told TechCrunch in an email interview. “There’s also very little insight currently available on how you’re spending money on AI; observability is a big challenge as AI spend skyrockets. We can help simplify all of these aspects for developers.”

To this end, Workers AI attempts to ensure AI inference always happens on GPUs close to users (from a geographic standpoint) to deliver a low-latency, AI-powered end-user experience. Leveraging ONNX, the Microsoft-backed intermediary machine learning toolkit used to convert between different AI frameworks, Workers AI allows AI models to run wherever processing makes the most sense in terms of bandwidth, latency, connectivity, processing and localization constraints.

Workers AI users can choose models from a catalog to get started, including large language models (LLMs) like Meta’s Llama 2, automatic speech recognition models, image classifiers and sentiment analysis models. With Workers AI, data stays in the server region where it originally resided. And any data used for inference — e.g. prompts fed to an LLM or image-generating model — aren’t used to train current or future AI models.

“Ideally, inference should happen near the user for a low-latency user experience. However, devices don’t always have the compute capacity or battery power required to execute large models such as LLMs,” Prince said. “Meanwhile, traditional centralized clouds are often geographically too far from the end user. These centralized clouds are also mostly based in the U.S., making it complicated for businesses around the world that prefer not to (or legally cannot) send data out of its home country. Cloudflare provides the best place to solve both these problems.”

Workers AI already has a major vendor partner: AI startup Hugging Face. Hugging Face will optimize generative AI models to run on Workers AI, Cloudflare says, while Cloudflare will become the first serverless GPU partner for deploying Hugging Face models.

Databricks is another. Databricks says that it’ll work to bring AI inference to Workers AI through MLflow, the open source platform for managing machine learning workflows, and Databricks’ marketplace for software. Cloudflare will join the MLflow project as an active contributor, and Databricks will roll out MLflow capabilities to developers actively building on the Workers AI platform.

Vectorize targets a different segment of customers: those needing to store vector embeddings for AI models in a database. Vector embeddings, the building blocks of machine learning algorithms used by applications ranging from search to AI assistants, are representations of training data that are more compact while preserving what’s meaningful about the data.

Models in Workers AI can be used to generate embeddings that can then be stored in Vectorize. Or, customers can keep embeddings generated by third-party models from vendors such as OpenAI and Cohere.
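
At its core, querying a store of embeddings means comparing a query vector against stored vectors and returning the closest match. The toy Rust sketch below illustrates that idea; the names and three-dimensional vectors are made up for illustration, and production databases such as Vectorize use approximate indexes rather than a linear scan like this one.

// Cosine similarity between two vectors: closer to 1.0 means more similar.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b.iter()).map(|(x, y)| x * y).sum();
    let norm_a = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm_a * norm_b)
}

// Linear scan over stored (id, embedding) pairs, returning the id of the best match.
fn nearest<'a>(query: &[f32], stored: &'a [(String, Vec<f32>)]) -> Option<&'a str> {
    let mut best: Option<(&'a str, f32)> = None;
    for (id, embedding) in stored {
        let score = cosine_similarity(query, embedding);
        if best.map_or(true, |(_, s)| score > s) {
            best = Some((id.as_str(), score));
        }
    }
    best.map(|(id, _)| id)
}

fn main() {
    // Hypothetical document embeddings; real ones come from a model and have
    // hundreds or thousands of dimensions.
    let stored = vec![
        ("refund-policy-doc".to_string(), vec![0.9, 0.1, 0.0]),
        ("shipping-times-doc".to_string(), vec![0.1, 0.8, 0.3]),
    ];
    let query = [0.85, 0.2, 0.05];
    println!("{:?}", nearest(&query, &stored)); // Some("refund-policy-doc")
}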

Now, vector databases are hardly new. Startups like Pinecone host them, as do public cloud incumbents like AWS, Azure and Google Cloud. But Prince asserts that Vectorize benefits from Cloudflare’s global network, allowing queries of the database to happen closer to users — leading to reduced latency and inference time.

“As a developer, getting started with AI today requires access to — and management of — infrastructure that’s inaccessible to most,” Prince said. “We can help make it a simpler experience from the get-go … We’re able to add this technology to our existing network, allowing us to leverage our existing infrastructure and pass on better performance, as well as better cost.”

The last component of the AI suite, AI Gateway, provides observability features to assist with tracking AI traffic. For example, AI Gateway keeps tabs on the number of model inferencing requests as well as the duration of those requests, the number of users using a model and the overall cost of running an AI app.

In addition, AI Gateway offers capabilities to reduce costs, including caching and rate limiting. With caching, customers can cache responses from LLMs to common questions, minimizing (but presumably not entirely eliminating) the need for an LLM to generate a new response. Rate limiting confers more control over how apps scale by mitigating malicious actors and heavy traffic.
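
Conceptually, that caching layer is memoization keyed on the prompt: repeated questions are answered from the cache instead of triggering a fresh round of inference. The sketch below illustrates the idea in Rust; it is not Cloudflare’s API, and generate_response is a hypothetical stand-in for whatever model call an application actually makes.

use std::collections::HashMap;

// Hypothetical stand-in for a call to a hosted model; in a real app this
// would be a network request to an inference endpoint.
fn generate_response(prompt: &str) -> String {
    format!("model output for: {}", prompt)
}

// A toy prompt -> response cache: repeated questions skip the model call.
struct ResponseCache {
    entries: HashMap<String, String>,
}

impl ResponseCache {
    fn new() -> Self {
        Self { entries: HashMap::new() }
    }

    fn get_or_generate(&mut self, prompt: &str) -> String {
        if let Some(cached) = self.entries.get(prompt) {
            return cached.clone(); // cache hit: no inference cost
        }
        let fresh = generate_response(prompt); // cache miss: pay for inference once
        self.entries.insert(prompt.to_string(), fresh.clone());
        fresh
    }
}

fn main() {
    let mut cache = ResponseCache::new();
    // The second identical question is served from the cache.
    println!("{}", cache.get_or_generate("What are your opening hours?"));
    println!("{}", cache.get_or_generate("What are your opening hours?"));
}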

Prince makes the claim that, with AI Gateway, Cloudflare is one of the few providers of its size that lets developers and companies only pay for the compute they use. That’s not completely true — third-party tools like GPTCache can replicate AI Gateway’s caching functionality on other providers, and providers including Vercel deliver rate limiting as a service — but he also argues that Cloudflare’s approach is more streamlined than the competition’s.

We’ll have to see if that’s the case.

“Currently, customers are paying for a lot of idle compute in the form of virtual machines and GPUs that go unused,” Prince said. “We see an opportunity to abstract away a lot of the toil and complexity that’s associated with machine learning operations today, and service developers’ machine learning workflows through a holistic solution.”



from Hacker News https://ift.tt/WR5LFra

Friday, September 29, 2023

Performance Evaluation of Rust Asynchronous Frameworks (2022)

A Performance Evaluation on Rust Asynchronous Frameworks

14 April 2022 -- Paris.

As we previously mentioned in this blog post, Zenoh is written in Rust and leverages the async features to achieve high performance and scalability. At the present stage, we rely on the async_std framework – a decision that we took after a careful performance evaluation of the frameworks available in late 2019. This framework has proven to be quite effective, allowing Zenoh to reach more than 4M msg/s with 8-byte payloads and over 45 Gb/s with 1 MiB payloads while keeping latency at ~30µs.

However, async_std development seems to be stalling and the community appears to be moving towards other async frameworks, such as Tokio. As such, we decided to re-evaluate the major Rust async frameworks in order to assess the possibility of moving to another framework without compromising our performance.

In this post, we will go through the evaluation of three asynchronous frameworks with respect to how they perform on asynchronous networking. Each of them will be evaluated and compared against the baseline performance of the equivalent synchronous primitives provided by the Rust standard library. Namely, we are targeting the following frameworks: async_std, Tokio, and smol.

Preparation of the testing environment

The first step toward reproducible results and a fair evaluation is a stable and dedicated environment. In other words, in any benchmarking effort, it is essential to reduce the number of factors that may influence the results of our performance evaluation. This guide effectively summarizes how to properly set up a Linux environment and how to get consistent results. The second recommendation is to have a thorough read of The Rust Performance Book. If you, like us, are developing in Rust, we recommend going through it, since we found it really insightful when it comes to performance tips and tricks, along with profiling techniques in Rust. Another nice reference on how to write performant code in Rust is this one.

All the tests below are run on two of our workstations, each equipped with an AMD Ryzen 5800X @ 4.0GHz and 32 GB of RAM, running Ubuntu 20.04.3 LTS with kernel 5.4.0-96-generic, connected through a 100Gb Ethernet connection (Mellanox ConnectX-6 Dx).

Experiment Description

For this evaluation, we concentrate on Round Trip Time (RTT) by building a ping-pong application for each framework. This synthetic benchmark is essential for us as it gives a lower bound on the achievable latency, as well as insight into each framework’s behavior under “async contention”. The Round Trip Time (RTT) is the amount of time it takes for a message to be sent plus the amount of time it takes for the acknowledgment of that message to be received.

The picture below illustrates the ping-pong application, and how the RTT is computed.

[Figure: the ping-pong application and how the RTT is computed]

Our RTT tests are provided here, so you can check what we actually used to get the RTT results and replicate them yourself!
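
To give a concrete idea of what each ping-pong test measures, below is a minimal sketch of the synchronous baseline using only the Rust standard library. This is illustrative code rather than the benchmark harness itself; an async variant would swap std::net for each framework’s TCP types and await the equivalent reads and writes.

use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::time::Instant;

// "pong" side: accept one connection and echo every 64-byte message back.
fn pong(addr: &str) -> std::io::Result<()> {
    let listener = TcpListener::bind(addr)?;
    let (mut stream, _) = listener.accept()?;
    stream.set_nodelay(true)?;
    let mut buf = [0u8; 64];
    loop {
        stream.read_exact(&mut buf)?;
        stream.write_all(&buf)?;
    }
}

// "ping" side: send a 64-byte payload, wait for the echo, record the RTT.
fn ping(addr: &str, samples: usize) -> std::io::Result<()> {
    let mut stream = TcpStream::connect(addr)?;
    stream.set_nodelay(true)?;
    let payload = [0u8; 64];
    let mut buf = [0u8; 64];
    for _ in 0..samples {
        let start = Instant::now();
        stream.write_all(&payload)?;
        stream.read_exact(&mut buf)?;
        println!("rtt_us={}", start.elapsed().as_micros());
    }
    Ok(())
}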

Looking at the results

In the following, we are presenting the RTT results for all the frameworks under two different scenarios: over localhost and over the network.

Localhost

In our first series of tests, the ping-pong application is executed on a single machine, leveraging only localhost communication.

RTT

To replicate these experiments, you can build and run the RTT test by following these instructions:

$ git clone https://github.com/ZettaScaleLabs/rust-async-net-eval.git
$ cd rust-async-net-eval
$ make

# ---- RTT in localhost ----
# run all the tests in localhost
$ ./run-localhost.sh -asStP

# parse the results
$ python3 parse.py -d latency-logs -k rtt -o localhost-latency.pdf -l 0

One very important aspect to mention is that RTT depends on the load of the system. As you can see from the figure below, as the number of messages per second increases, the RTT decreases. This is because when messages are sent at a low rate, the processes are more likely to be de-scheduled by the operating system. This de-scheduling adds latency, since the processes need to be rescheduled when messages are sent and received. This is true both for the Rust code and for the classical ping, which is reported as a reference baseline for RTT.

The x-axis of the figure below shows the number of messages that we configured to be sent in one second, from a single message to 1 million and beyond. The inf case represents the scenario where messages are sent back-to-back as fast as possible. In such a backlogged scenario, we can see that Rust latency is as low as 5 µs for the standard library. The payload size of each message is 64 bytes, the same as a standard ICMP ping packet.

[Figure: RTT over localhost as a function of message rate]

Over the network

It is also interesting to see the behavior over a real physical network, as the asynchronous frameworks should take advantage of real blocking I/O operations, such as sending messages over the network. In this case, we used two workstations, one running the ping and the other one running the pong.

[Figure: RTT over the 100GbE network as a function of message rate]

Adding CPU bounded computing

But Zenoh does not only send data; it also has a set of CPU-bound tasks, like looking up a forwarding table, de/serializing messages, and so on. To this end, it is interesting to validate how these frameworks perform when interleaving the I/O tasks with some compute-intensive tasks.

A Zenoh peer runs two separate tasks for each session with other Zenoh peers, so we modified the ping-pong applications to spawn a number of tasks that mimic those compute-intensive tasks. In our tests we range from 10 to 1000 tasks, mimicking from 5 to 500 “Zenoh sessions”. A rough sketch of this task spawning is shown below; the figures that follow illustrate the different results.
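
Here is roughly what that task spawning could look like with async_std; the workload is an arbitrary stand-in for Zenoh’s real per-session work, and the actual benchmark code linked above remains the reference.

use async_std::task;

// Spawn `n` detached tasks that alternate a burst of arithmetic with a yield,
// mimicking per-session CPU-bound work (routing-table lookups, de/serialization)
// competing with the ping-pong I/O task for executor time.
fn spawn_compute_tasks(n: usize) {
    for _ in 0..n {
        task::spawn(async {
            let mut acc: u64 = 0;
            loop {
                for i in 0..10_000u64 {
                    acc = acc.wrapping_mul(6364136223846793005).wrapping_add(i);
                }
                std::hint::black_box(acc); // keep the busywork from being optimized away
                task::yield_now().await;   // give the executor a chance to run the I/O task
            }
        });
    }
}

// e.g. call spawn_compute_tasks(1000) before starting the ping loop.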

Localhost

10 tasks

[Figure: RTT over localhost with 10 compute tasks]

1000 tasks

[Figure: RTT over localhost with 1000 compute tasks]

Over a 100GbE network

In this series of tests the ping and the pong applications run on two different machines, leveraging the 100GbE network connectivity, while varying the number of computing tasks.

10 tasks

[Figure: RTT over 100GbE with 10 compute tasks]

100 tasks

[Figure: RTT over 100GbE with 100 compute tasks]

1000 tasks

[Figure: RTT over 100GbE with 1000 compute tasks]

Conclusions

Our evaluation shows that async_std and smol are quite close to the standard library, and even outperform it on some workloads. On the other hand, Tokio seems to reach its limit of ~18µs very quickly, already at 100 msg/s, and it shows no difference between TCP and UDP. Additionally, Tokio seems to be adversely impacted by the CPU-bound (Rust) asynchronous tasks. Based on these results, we believe that we have no choice but to remain on async_std. That said, it would be interesting to understand why Tokio exhibits such behavior under contention, and also to improve its raw performance to close the gap with async_std. As it stands, Tokio introduces 8µs of additional latency on localhost and 10µs over the network.

Ideally, we would like to see one async framework become the “standard”, but to get there we can’t ignore raw performance. We look forward to engaging and working with the rest of the community to help make this happen.



from Hacker News https://ift.tt/qyFdzEl

Show HN: PDF Debugger – Inspect Structure of PDF Files


from Hacker News https://pdf.hyzyla.dev/

Techies are paying $700 a month for tiny bed ‘pods’ in downtown San Francisco

Some of Brownstone’s “pods” at the firm’s SoMa location. They’re 4 feet tall and fit a twin mattress.

Courtesy of Brownstone

To rent in the Bay Area is to compromise; many of us share apartments and go without wishlist amenities to keep costs down. But techies in downtown San Francisco are taking it a few steps further.

Startup founders are paying $700 a month to stay in bed “pods” — tiny, semi-open boxes that only fit a single twin mattress and stand just 4 feet tall — according to media reports this week. The pods, made of steel and wood with a blackout curtain at one end, are arranged in a two-high, 14-long grid; residents share five bathrooms and a few common spaces, but don’t have a full kitchen or any laundry machines.

Brownstone, the startup that designed the pods and runs the bed rentals, is still setting up the space in San Francisco’s SoMa neighborhood, CEO James Stallworth told SFGATE. The company brought in its first customers in June, he said, adding to the firm’s footprint across the Bay Area — Brownstone rents out pods in Palo Alto ($800 a month), San Jose ($650), and Bakersfield ($500).

It’s a high price for living in a 4-foot-tall rectangular prism, but Stallworth said every one of the SoMa location’s 28 beds will be taken in October. Pods come with utilities included, month-to-month contracts and no security deposit. The Bay Area’s incredibly costly housing market is part of what’s driving that demand; Zillow has San Francisco’s median rent for studios at $2,205, or $655 above the national median. Budget renters in the city often opt to live with roommates, split rent with a partner and find cheaper offerings on Craigslist or Facebook Marketplace.

The artificial intelligence trend, it seems, is also bringing in the pod-dwellers. Stallworth said his company doesn’t sort potential residents based on the jobs they do, but AI-interested renters have been particularly prevalent. “Going into the house, we just knew there were a lot of reasons to be in San Francisco,” Stallworth said. “It turns out AI is currently what a lot of people are doing.”

Christian Lewis, the founder of the nascent AI company Spellcraft, posted about living in one of the pods on X, formerly known as Twitter, on Sept. 16. He wrote that he wanted to live in the city “without paying $4,000 a month or getting stabbed,” and called the pod a “great solution.”

Lewis told ABC7 that he’d just moved from Illinois, and that in a few days in the shared living space, he met “some of the smartest people I’ve met in my entire life. That’s the reason I came and that’s the reason why I’m staying. That’s the reason why I’m living in a pod.” 

ABC7 showed Mayor London Breed a photo of the living space — she appeared to buy into the minimalist approach.

“You do what you can when you know you have a product that is going to make it so that you don’t necessarily have to live at a place like that for the rest of your life,” Breed told the station.

Breed and prominent San Francisco business leaders like Salesforce CEO Marc Benioff have continuously expressed hope that AI startups will bring workers back into San Francisco’s offices. Anthropic announced a $4 billion investment from Amazon on Monday; the AI firm is planning to sublease a 230,000-square-foot office from Salesforce, the San Francisco Chronicle reported. (The Chronicle and SFGATE are both owned by Hearst but have separate newsrooms.)

As for the future of Brownstone, Stallworth said he's hoping other housing providers will start using the company's design and model as an answer to homelessness. It is working to spiff up the SoMa location and is considering a slight rebrand, due to what Stallworth called "the negative connotations of 'pods' from science fiction."

“I think ‘private beds’ is a better descriptive term,” he said.

Hear of anything happening at a Bay Area tech company? Contact tech reporter Stephen Council securely at stephen.council@sfgate.com or on Signal at 628-204-5452.



from Hacker News https://ift.tt/JyAQV4B

The fastest route to a climate turnaround is also less expensive

Undertaking a program of action against climate change ten years from now is almost as expensive as getting started on a more ambitious effort to stop climate change today, according to a new study.

The results emerge from the first large-scale, comprehensive effort to simulate climate futures that also takes uncertainty into account.

Both climate change itself and the political, economic, and technological measures we might deploy against it are highly uncertain. We don't know exactly when certain green technologies will become available, what they will cost, how sensitive the global climate will be to a given change in greenhouse gas emissions, or the backdrop of global population growth and economic development against which this will all play out.

Most studies of future climate change and decarbonization efforts compare a small number of scenarios across several different computer models, with each scenario representing the average of an array of possibilities – which can lead to a false sense of precision.

In the new study, researchers instead analyzed a large number of scenarios within one massive model of the global climate, economy, and energy system. They used a supercomputer to crunch 700 gigabytes of data and simulated 4,000 different scenarios for 10 regions of the world through the year 2100. The analysis took into account 18 different sources of uncertainty and involved adjusting 72,000 different variables for each scenario.


“This study does not predict the future,” says study team member Evangelos Panos, an energy systems modeller at the Paul Scherrer Institute in Switzerland. “It creates a data map made up of what-if decision pathways based on understanding existing uncertainties to help stakeholders and policymakers make decisions on climate action.”

Some of the scenarios assumed that current social, economic, and technological trends will continue more or less as usual. Some envisioned a major push to decarbonize the global economy in line with limiting warming to 1.5 °C in the year 2100, others a slightly less ambitious push that would result in 2 °C of warming in 2100. The remaining scenarios explored what might happen if the push to limit warming to 2 °C was delayed for a decade.

Of the 4,000 scenarios, 70% suggest that the global temperature increase will exceed 1.5 °C in the next 5 years, the researchers report in the journal Energy Policy. (Many of the scenarios involve an “overshoot” of global temperature benchmarks – but emissions cuts and carbon removal technologies would bring the temperature back down by the end of the century.)

“The study highlights the urgency for immediate policy action for mitigation and adaptation,” Panos says.

Decarbonizing the global economy is likely to be very expensive: on average, the 1.5 °C scenarios require an investment of $8 trillion in 2030 and $26 trillion in 2050.

Delaying climate action is cheaper in the short term – but the costs of limiting warming to 2 °C with action starting ten years from now are similar to the costs of limiting warming to 1.5 °C with action starting today, the researchers found.

Delay also “would risk higher stranded assets in energy supply and use and irreversible climate change damages as in more than 55% of those scenarios with delayed action the temperature increases by more than 2 °C in 2050,” Panos adds.

The study suggests that policymakers need to put more emphasis on electrifying technologies throughout the economy – cooking, heating of buildings, industrial processes, transportation – rather than just focusing on greening the electric grid.

Yet none of the 4,000 scenarios relies on one singular technology or approach to tackling climate change. “It suggests that we need a portfolio approach in energy supply and use, with all low- and zero-carbon options, as well as adaptation measures, being developed together,” Panos says.

Source: Panos E. et al. "Deep decarbonisation pathways of the energy system in times of unprecedented uncertainty in the energy sector." Energy Policy, 2023.


from Hacker News https://ift.tt/pVb0N4v

There were no humans at all in New Zealand until about 1250AD

Comments

from Hacker News https://ift.tt/VmgO40y

Nvidia Offices Raided by French Law Enforcement

NVIDIA's offices in France were raided by French law enforcement and antitrust authorities on suspicion of anti-competitive practices.

NVIDIA France Is Suspected of Engaging In "Anticompetitive Practices" According To French Law Enforcement, Offices Raided

A report by Bloomberg highlights that the French antitrust authorities raided a business suspected of engaging in anticompetitive practices.

The Wall Street Journal was able to confirm that the company whose offices were raided is NVIDIA, on suspicion that it may be involved in anti-competitive practices within the graphics card sector. The French authority did not cite NVIDIA by name, and the company has declined to make any statement on the matter, Bloomberg reports.

“Raids do not presuppose the existence of a breach of the law,” France’s competition authority said in a statement on its website, “which only a full investigation into the merits of the case could establish, if appropriate.”

via Bloomberg

The WSJ further adds that the French authorities are concerned that NVIDIA's dominance within the tech sector could exclude smaller businesses and startups. The company is now the subject of an inquiry.

There is currently limited information about what actually happened, and the suspected anti-competitive practices may turn out to be nothing more than a false alarm. NVIDIA has recently gained a lot of traction in the GPU segment and, more importantly, in the AI ecosystem. The green team's market value has soared past the trillion-dollar mark, and demand for its GPUs is at an all-time high, which has prompted competitors such as AMD and Intel to fill the space NVIDIA leaves open because it cannot keep up with the huge AI chip orders.

AI has certainly propelled NVIDIA into the big leagues, but there are now more eyes watching the green team as it moves forward. This might be the start of several such cases, or a one-off that simply blows over.

News Source: Yahoo Finance!



from Hacker News https://ift.tt/r3t9WaQ

Thursday, September 28, 2023

Costco is selling gold bars and they are selling out within hours

Costco is well-known as a place to get bargain prices on any variety of items, from food to luggage to appliances to gold bars.

Wait, gold bars?

Yes, the retail warehousing giant is your one-stop shop for 1 ounce gold PAMP Suisse Lady Fortuna Veriscan bars, handsomely detailed and ready for purchase.

They're available for the bargain price of … well, you have to be a member to know that, but apparently they were selling for a little shy of $1,900 recently, according to chatter on Reddit. Spot gold most recently was going for $1,876.56 an ounce as of Wednesday afternoon.

Regardless of the price, gold is selling like hotcakes, judging by comments Tuesday from Costco Chief Financial Officer Richard Galanti. Speaking on the company's quarterly earnings call, Galanti said the bars are in hot demand and don't last long when in stock.

"I've gotten a couple of calls that people have seen online that we've been selling 1 ounce gold bars," he said. "Yes, but when we load them on the site, they're typically gone within a few hours, and we limit two per member."

Costco is selling 1 ounce gold bars.

Costco

A couple of important points from that thought: The bars indeed are only available online, and only if you're a Costco member, which costs either $120 or $60 a year, depending on which program you pick. The retailer also is limiting the purchases to two to a customer, meaning it would be pretty hard to build a position that would lead to financial security.

At the very least, though, it's an effective promotion and one that could appeal to a certain sector of Costco's shopping clientele, said Jonathan Rose, co-founder of Genesis Gold Group.

Rose noted that the company seems to have accelerated its offerings of dried foods and other survivalist goods at a time when worries about the future are running high. For example, the company markets a 150-serving emergency food preparedness kit that could come in handy, you know, just in case. Gold meshes with that type of product.

"They've done their market research. I think it's a very clever way to get their name in the news and have some great publicity," he said. "There is definitely a crossover of people living off the land, being self-sufficient, believing in your own currency. That's the appeal to gold as a safe haven as people lose faith in the U.S. dollar."

[Chart: Gold futures, last 5 years]

Precious metals have been on a run over the past several years. Gold has risen more than 15% over the past year and more than 55% over the past five years.

With inflation still elevated, banks under the gun from a regulatory standpoint and looming issues in the commercial real estate market, the safe-haven aspect of gold and silver should be strong, Rose said.

"We know what the road map looks like: Bank failures, commercial loans defaulting at an alarming rate … they don't seem to have a handle on inflation, and that's why they keep raising interest rates," he said. "The outlook for stability in the market isn't good and people want a [tangible] asset that's going to be a safe haven. That's what gold and silver provide."

The hoarding of gold bars is a hot topic lately after U.S. Sen. Bob Menendez of New Jersey was indicted on federal bribery charges and 81.5 ounces in bullion were seized from his home.



from Hacker News https://ift.tt/wzMY0y2

AnyMAL: An Efficient and Scalable Any-Modality Augmented Language Model

[Submitted on 27 Sep 2023]

By Seungwhan Moon and 12 other authors
Abstract: We present Any-Modality Augmented Language Model (AnyMAL), a unified model that reasons over diverse input modality signals (i.e. text, image, video, audio, IMU motion sensor), and generates textual responses. AnyMAL inherits the powerful text-based reasoning abilities of the state-of-the-art LLMs including LLaMA-2 (70B), and converts modality-specific signals to the joint textual space through a pre-trained aligner module. To further strengthen the multimodal LLM's capabilities, we fine-tune the model with a multimodal instruction set manually collected to cover diverse topics and tasks beyond simple QAs. We conduct comprehensive empirical analysis comprising both human and automatic evaluations, and demonstrate state-of-the-art performance on various multimodal tasks.

Submission history

From: Seungwhan Moon

[v1] Wed, 27 Sep 2023 22:50:51 UTC (31,162 KB)



from Hacker News https://ift.tt/BgfDqrh