Roadrunner is a reusable, vertical take-off and landing (VTOL), operator-supervised Autonomous Air Vehicle (AAV) with twin turbojet engines and modular payload configurations that can support a variety of missions.
Roadrunner-M is a high-explosive interceptor variant of Roadrunner built for ground-based air defense that can rapidly launch, identify, intercept, and destroy a wide variety of aerial threats — or be safely recovered and relaunched at near-zero cost.
The massive worldwide focus on electric vehicle technology has already introduced noteworthy changes to automobile design, notably due to the flat floor design with one large battery pack popularized by Tesla and thereafter adopted by nearly every other major automaker. The reduction in size of drive components has made the term "frunk" part of our regular vernacular, as well. And now Hyundai and Kia have introduced the world to what they say is the next major paradigm shift in EV architecture: the Universal Wheel Drive System, or Uni Wheel for short.
Basically, the Korean automaker conglomerate has figured out a way to replace the standard constant velocity (CV) joint with an arrangement of gears that reside directly inside the wheel hub. We could do our best to explain the arrangement — the short version: a single sun gear transfers power through a pair of packaged pinion gear arms that can move independently to an outer ring gear in a unique and flexible planetary design — but it makes the most sense after looking at the pictures and especially the animated video that we've embedded just below.
Hyundai/Kia highlight many benefits to this Uni Wheel drive system. Claims include reduced packaging size, improved ride quality, greater durability and, importantly, increased efficiency. All of that means a smaller battery could provide similar range to today's EVs, that a similarly sized battery could provide greater range, or even that the space that's freed up by Uni Wheels would allow greater battery size and therefore range without requiring a larger vehicle platform.
In addition to electric cars and trucks, the automaker group says that Uni Wheels can be used for a large variety of mobility applications. These would include vehicles with fewer than four wheels, like scooters and motorcycles, and even wheelchairs and delivery robots. The automakers have reportedly applied for and registered eight patents related to Uni Wheel in South Korea as well as the United States and Europe.
Many years ago, I did a brief stint at Google. A lot has changed since then, but even that brief exposure to Google's internal tools left a lasting impression on me. In many ways, the dev tools inside Google are the most advanced in the world. Google has been a pioneer not only in scaling their own software systems but in figuring out how to build software effectively at scale. They've dealt with issues related to codebase volume, code discoverability, organizational knowledge sharing, and multi-service deployment at a level of sophistication that most other companies have not yet reached. (For reference, see Software Engineering at Google.)
In other ways, however, Google's internal tools are awfully limited. In particular, nearly all of them are tightly coupled with Google's internal environment. Unfortunately, that means you can't take them with you when you leave.
The Google diaspora has seeded so many other organizations with amazingly talented people who bring lessons learned from working inside one of the world's leading technology organizations. But adapting to programming outside of Google can be tough, especially when you've come to rely on tools you no longer have at your disposal.
Over the years, I've learned from my own experience and the experience of lots of others who have left Google. Many of Sourcegraph's early customers began with an Xoogler missing code search after leaving Google. I worked closely with these people to understand the gap they were trying to fill, so that we could build Sourcegraph to meet their needs. Over time, patterns emerged in terms of how ex-Googlers sought to introduce new dev tools into their organizations, inspired by their experience with dev tools at Google. Some were successful and others were not.
I thought it would be helpful to write a guide to dev tools outside of Google for the ex-Googler, written with an eye toward pragmatism. No doubt many ex-Googlers wish they could simply clone the Google internal environment to their new company, but you can't boil the ocean. Here is my take on where you should start and a general path I think ex-Googlers can take to find the tools that will make them, and their new teams, as productive as possible.
If you recently left Google to join another company, you probably have this overall sense of frustration that you're not as productive as you used to be. You feel you need to change something, but what is it? As a first step, you should think about what you do day to day and identify where the pain is coming from.
Both inside Google and out, the software development lifecycle looks something like this:
You think of a feature you want to build or a bug you need to fix.
You read a bunch of code, design docs, and ask colleagues questions. You're building an understanding of the problem and how the solution will roughly fit into the existing system.
You start writing code. You aim first for something just working. Maybe you go back and look up docs or read some more code several times while you're doing this.
You have it working, but you're not ready to ship. You've broken some tests, which you now fix, you add some more tests, you refactor the code to make it cleaner and easier for the next person to understand. You push it up to a branch. You wait for CI to run and maybe you implement a couple of additional fixes and small improvements.
You submit the patch for review. Your teammates leave some comments. You make the changes. Maybe there are a few rounds of back-and-forth before the reviewer(s) approve the change.
You merge the patch and it gets deployed.
Monitoring systems that are in place will determine if there are any production issues. If it's your patch that caused an outage, you're on the hook for fixing it.
At every stage in this process, there is typically a tool that anchors the developer experience. Tools shape your work cycle and have an immense impact on your productivity.
To improve your productivity, you need to find better tools. There's a useful GitHub repository that identifies nearly every single tool inside Google and its closest comparables outside of Google: https://github.com/jhuangtw/xg2xg. This list is comprehensive, but a bit overwhelming. So where do you start?
In your first month, don't try to change anything. Just listen and learn the ropes.
As a new member of the team, you likely don't have the influence or authority to change all the tools your team uses. Moreover, you also lack knowledge—knowledge of how and why your new team behaves the way it does and why it uses its current set of tools. Simply copy-pasting Google internal tools is not necessarily going to work for your new team. So learn what is working for your new team along with what isn't.
Low-hanging fruit
I believe that code search is usually a great tool to start with. Yes, I am the cofounder of a code search company, so of course I would say that, but here are my reasons—if these don't resonate with you, then I recommend skipping to the next section!
It's one of the tools ex-Googlers usually miss the most from their everyday lives.
You can try different code search engines on your own and validate that one is good before sharing it with others. This means you don't need to obtain approval from gatekeepers or expend precious social capital convincing others to try a tool you haven't started using yourself.
It won't require forcing others to change existing habits, because your new team doesn't yet have a code search tool. If they do, then it's either bad and they don't use it much, or it's good and you already have good code search, so skip this section!
If your new company has more than a few teams in the organization, you're likely dealing with more code than you can reasonably grok as an individual person. And even if you're working in a much smaller company, chances are you're pulling in a ton of open-source code through your dependencies. This is all code that you will need to dive into at some point, when building a new feature or tracing the source of some critical bug.
Given the scale of code that nearly every developer has to deal with these days, it's no wonder that the lack of code search can quickly slow your development velocity to a crawl.
When evaluating code search engines, there are a few things to consider:
Query language: Regular expressions are table stakes. You want to ensure the code search query language is both expressive and easy to use. Literal searches should be intuitive, and more advanced pattern matching should be available.
Scale: Ensure the code search engine scales to the size of your codebase. If your codebase is more than a few gigabytes, a key thing to look for is whether the engine uses a trigram index (https://swtch.com/~rsc/regexp/regexp4.html), because that is how you get regular expression matching to work at the scale of a large codebase.
Code browsing: As a user of Google Code Search, you know that search is only half the story. Once you click through to a result, you want to be able to click around to jump to definitions and find references as easily as if you had checked out the code and set up the dev environment in your IDE. Without great code browsing, you'll be context-switching between your editor and code search engine frequently.
Permissions: If your company enforces codebase permissions, you should consider whether your code search engine respects those.
Overall cost: Consider both the price of the code search engine and the maintenance overhead of keeping the tool online.
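The trigram-index approach from Russ Cox's article (linked above) is worth understanding when you evaluate engines at scale. Here is a toy Python sketch of the core idea, not a production index: map every 3-character substring to the files containing it, then answer a literal query by intersecting posting lists. Function and variable names here are my own, for illustration only.

```python
from collections import defaultdict

def trigrams(s):
    """All overlapping 3-character substrings of s."""
    return {s[i:i + 3] for i in range(len(s) - 2)}

def build_index(files):
    """Map each trigram to the set of files whose content contains it."""
    index = defaultdict(set)
    for name, content in files.items():
        for t in trigrams(content):
            index[t].add(name)
    return index

def candidates(index, literal, all_files):
    """Files that *might* contain the literal: intersect posting lists.

    A real engine derives required trigrams from the regular expression
    itself; this sketch handles only plain literal queries."""
    ts = trigrams(literal)
    if not ts:
        return set(all_files)  # query too short to use the index
    sets = [index.get(t, set()) for t in ts]
    return set.intersection(*sets)
```

The candidate set is a superset of the true matches, so the engine still scans each candidate file with the real regular expression; the index's job is only to avoid scanning everything.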
Here are the common code search engines we've seen in use:
OpenGrok: a fairly old but persistent code search engine now maintained by Oracle
Hound: a code search engine created and open-sourced by engineers at Etsy
Livegrep: a code search engine created by Nelson Elhage at Stripe
Another good early area to target is monitoring. Every engineer at some point has to deal with a production issue. Production is a very different place than development—you can't just set a breakpoint or add a printf and see the effect in seconds. Making updates to production is expensive along multiple dimensions: compute resources, developer time, and worst of all, pain to your users and customers.
Deployment has changed a lot in the past 5-10 years. Microservices, Kubernetes, moving to the Cloud—these are big shifts in how companies deploy software. Many companies have adopted these new paradigms and technologies, but they have not yet updated their monitoring infrastructure to make debugging in these new production environments easy.
Fortunately, there have been some great new open-source tools and companies in recent years that have vastly improved the state of monitoring and observability in the world outside of Google.
Prometheus is a time-series metrics tracker and visualizer that's similar to Borgmon. It lets you instrument your application to track metrics like CPU utilization, error rate, and p90 latency that change over time.
Grafana is a dashboarding tool similar to Viceroy. A common situation is to connect Grafana with Prometheus, so you can construct a single-page view of a bunch of key metrics that indicate overall application health.
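If "p90 latency" is unfamiliar jargon: it is simply the 90th percentile of observed request durations. Prometheus estimates quantiles from histogram buckets rather than raw samples, but the underlying idea can be sketched with raw samples in a few lines of Python (a simplified illustration, not Prometheus's actual algorithm):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the smallest sample such that at least
    p% of observations fall at or below it."""
    ordered = sorted(samples)
    k = max(1, math.ceil(p / 100 * len(ordered)))  # 1-indexed rank
    return ordered[k - 1]

# Ten request latencies in milliseconds, as an instrumented service
# might record them. The two slow outliers dominate the tail.
latencies_ms = [12, 15, 11, 90, 14, 13, 250, 16, 12, 18]
p90 = percentile(latencies_ms, 90)  # -> 90
```

This is why tail percentiles, not averages, anchor most latency dashboards: the mean of these samples is about 45 ms, which hides the fact that one in ten requests takes 90 ms or more.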
Google pioneered distributed tracing, an essential tool for increasingly common multi-service architectures, with Dapper. One of the creators of Dapper, Ben Sigelman, went on to start Lightstep. Distributed tracing is now a feature of many monitoring systems, including paid offerings like Honeycomb and Sentry and open-source projects like Jaeger, built by Uber engineers.
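The mechanism underlying Dapper-style tracing is straightforward: every request carries a trace ID, and each service records a span that points at its caller's span. The sketch below uses hypothetical names of my own invention, not any real tracing API; in a real system the trace context travels in RPC or HTTP headers and spans are exported to a collector.

```python
import time
import uuid

spans = []  # stand-in for a span exporter/collector

def start_span(name, trace_id=None, parent_id=None):
    """Begin a span; generate a fresh trace ID if this is the root."""
    return {
        "name": name,
        "trace_id": trace_id or uuid.uuid4().hex,
        "span_id": uuid.uuid4().hex,
        "parent_id": parent_id,
        "start": time.monotonic(),
    }

def finish_span(span):
    span["duration"] = time.monotonic() - span["start"]
    spans.append(span)

# A request flows through two "services"; the child span inherits the
# trace ID and records the parent's span ID, which is what lets a UI
# reassemble the whole request tree later.
root = start_span("frontend /checkout")
child = start_span("payments.Charge", root["trace_id"], root["span_id"])
finish_span(child)
finish_span(root)
```

Everything else in a tracing system (sampling, clock handling, storage, the waterfall UI) is engineering layered on top of this parent/child bookkeeping.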
Monitoring is a bit trickier to introduce than code search, since it has to be integrated into the production environment. This often involves changing the deployment environment, which probably means persuading the team that controls the deployment environment. It may also entail adding instrumentation code, which involves submitting patches to the various teams that own the code being instrumented. However, it is still easy in the sense that introducing a new tool doesn't require anyone to change existing habits. People are free to use the new tool or not, which eliminates a strong source of objections when you're trying to get the tool first deployed.
After you're in good standing: code review
Introducing code search and monitoring doesn't require asking anyone on the team to change existing workflows. Changing the code review tool, however, does.
Chances are that if you'd been at Google for a while, the way that code review is done outside of Google has struck you as a little weird. GitHub Pull Requests is the most common code review tool, but ex-Googlers usually have a few complaints about it:
It is not straightforward and sometimes not possible to view changes made since the last round of reviews. The easy path only lets you review the entire outstanding diff.
There is no support for stacked CRs.
The entire diff across all files in the changeset is displayed as one giant page, and it's hard to keep track of what hunks you've reviewed.
GitHub PRs are very unopinionated about how reviews should be done. Without adding additional third-party integrations, the review process can seem loosey-goosey, and even with third-party integrations, it still may lack the ability to enforce finer grained review and sign-off policies.
There is limited fuzzy jump-to-def or find-references for certain languages, but it is nowhere near the level that Critique supports inside of Google.
The closest thing to Critique you can get outside of Google is Gerrit. Gerrit began life as a fork of Rietveld, which itself was an open-source fork of Google's original code review tool, Mondrian. It should therefore feel very familiar, as it descends from a line of tools that were created to support the exact way that Google does code review.
Phabricator is another code review tool that ex-Googlers often prefer to GitHub Pull Requests. Phabricator began life as Facebook's internal code review tool and was subsequently open-sourced and released to the outside world. There's also a company behind it, Phacility, that offers hosted instances and support, in case you don't want to be on the hook for maintaining your own instance.
Another tool worth looking into is Reviewable, created by ex-Googler Piotr Kaminski. Unlike Gerrit or Phabricator, it is cloud-only, but may offer the code review experience that's closest to what is done inside Google today.
When selling the benefits of Gerrit, Phabricator, or Reviewable to the rest of your team, it's important to identify the pains the team is feeling with its existing code review tool. Here is how some common pain points are addressed by switching from a GitHub-Pull-Request-like tool to a Gerrit-like tool:
Gerrit facilitates a more structured review process, with explicit sign-offs, which can be good if you've grown the team and want to enforce stricter review policies across the organization.
Gerrit makes reviewing larger diffs easier, because you can go file by file, view changes since the last round of review, and stack CRs. This enables faster, more thorough reviews.
Gerrit, Phabricator, and Reviewable let you closely approximate the general review flow you had inside of Google, but one thing none of them provides is code intelligence. If you're missing code intelligence in your current code review tool or find the GitHub PR code intelligence lacking, try the Sourcegraph browser extension. It connects to a Sourcegraph instance to provide tooltips, jump-to-def, and cross-references during code review. It works with GitHub PRs, GitLab MRs, Phabricator, and Bitbucket Server. Support for Gerrit is on the way.
When you're ready to slay the dragon
The most intractable part of the software development life cycle is often CI and the build system. This is because understanding the build often involves understanding every piece of the overall codebase in a fairly nuanced way. Over time, various people try to speed up the build, and the build code accrues a growing set of hacks and optimizations, until very few people understand enough of what is going on to make a change without negative consequences.
In short, the build system is often a big giant hairball, and one that you should be wary of trying to disentangle before you pick off the lower-hanging developer productivity fruit. It may be tempting to tackle this earlier, because Blaze was worlds better than what you're using now and Google has even helpfully open-sourced a derivative of Blaze called Bazel. But Bazel is not Blaze, for one because it lacks the massive distributed build cluster that comes for free alongside Blaze inside Google, and the world outside of Google is not Google.
Bazel is not a silver bullet. When Bazel was first released, many open-source projects in the Go community switched to using it in favor of the standard Go build tool. However, within a year, many of these switched back due to complexity of use, unfamiliarity in the rest of the Go community, and the fact that builds seemed to actually be slower with Bazel. Since then, there have been major improvements to Bazel's support for Go, but you need to rigorously evaluate what improvements you'll see if you switch over to it.
In order to do this rigorous evaluation, you'll want to have plenty of other great dev tools already in place. In particular, you'll want to have great code search, so you can actually dive into the build scripts in various parts of the codebase and understand their ins and outs. You'll also want to have a great code review tool in place, because changing the build system is going to be a complex change that requires approval from many different engineering teams.
Once you're ready to slay the dragon, you should understand there are a number of build tools in addition to Bazel that are designed to enable scalable builds in large codebases. These include:
Pants, created by an ex-Googler originally for Twitter and Foursquare
Please, a newish build tool created by ex-Googlers, heavily inspired by Blaze
There's also YourBase, which is not a build tool, but a CI service started by ex-Googler Yves Junqueira to bring super-fast and scalable builds to the world outside of Google, independent of what underlying build tool is used.
Operating like a Xoogler
Google prioritizes developer experience and developer tools in a way unlike most other companies. Googlers and ex-Googlers have the benefit of firsthand experience of using first-class dev tools that add a huge amount of leverage to their natural talents and abilities.
One of your competitive advantages after leaving Google will be to apply these experiences to bring great new dev tools into your new organization to boost your own productivity and the productivity of your teammates. By using these tools to spread effective best practices for developing software at scale, you can bring one of Google's key competitive advantages—the effectiveness of its engineering organization—to your new company.
Building software at scale is hard. As everyone who has read The Mythical Man Month knows, you can't get better software just by hiring more engineers. You need better tools. Just as software is a multiplier for the productivity of end users, dev tools are a multiplier for the productivity of the people who build the software. So if you truly believe in your new company's mission, make it one of your priorities to use your special knowledge as an ex-Googler and bring them the best developer tools available.
Try out Sourcegraph. Get started searching your code by self-hosting Sourcegraph – free up to 10 users. Or try Sourcegraph Cloud to easily search public, open source code.
Please feel free to ask us questions either by posting on Twitter (@sourcegraph) or sending an email to [email protected].
About the author
Beyang Liu is the CTO and co-founder of Sourcegraph. Beyang studied Computer Science at Stanford, where he published research in probabilistic graphical models and computer vision at the Stanford AI Lab. You can chat with Beyang on Twitter @beyang or our community Discord.
Salvatore Sanfilippo, aka antirez released a telegram bot library in C recently. While I'm not at all interested in a telegram bot library, I always like to read Salvatore's code, which is uniformly superb.
The bit of code that caught my eye in this release is in this file, which implements a simple selector library on top of the json library he's using. In about 100 lines of code, it implements a function that parses and executes a tiny jq-like language:
/* You can select things like this:
 *
 * cJSON *json = cJSON_Parse(myjson_string);
 * cJSON *width = cJSON_Select(json,".features.screens[*].width",4);
 * cJSON *height = cJSON_Select(json,".features.screens[4].*","height");
 * cJSON *price = cJSON_Select(json,".features.screens[4].price_*",
 *                             price_type == EUR ? "eur" : "usd");
 */
It's a really nice example of just how simple you can make a parser for a tiny language.
I think it'd be fun to re-implement this in another language to understand it better, and I might give it a go at some point.
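As a taste of what such a re-implementation might look like, here is a rough Python sketch of the same selector idea. It handles only `.field`, `[index]`, and bare `*` placeholders filled from extra arguments, not suffix wildcards like `price_*`; the function name and behavior details are my own guesses at the spirit of antirez's API, not a faithful port.

```python
import re

def select(obj, path, *args):
    """Walk obj along a jq-like path; each '*' consumes the next argument."""
    tokens = re.findall(r'\.(\w+|\*)|\[(\d+|\*)\]', path)
    args = list(args)
    for field, index in tokens:
        token = field or index
        if token == '*':            # placeholder: filled from the arguments
            token = args.pop(0)
        if field:
            obj = obj[token]        # object key
        else:
            obj = obj[int(token)]   # array index
    return obj
```

Usage mirrors the C examples above: `select(doc, ".features.screens[*].width", 4)` selects the `width` of the fifth screen. The parser is just one regular expression producing a token stream, which is much of why the original fits in about 100 lines of C.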
Before the end of the year, John James “Jimbo” Fisher Jr., the recently fired head football coach at Texas A&M University, will receive a $19.4 million contract buyout payment. Then, over the next eight years, he’ll receive annual $7.27 million installments until his $77 million contract is fulfilled.
That’s a college coaching buyout record, and Fisher isn’t alone in getting a hefty paycheck sans working. According to ESPN, American colleges and universities have spent more than $500 million on fired coaches between 2010 and 2021. Since 2022, the total has increased by $70 million, not including Fisher’s.
As you read in the headline: I (Schabi) have lost access to about €6,400 worth of bounties on https://ift.tt/7ndTYPa.
Background
Bountysource is a website which acts as a platform for the placement of bounties for open source development. People can use it to post bounties to fix certain issues or develop certain features.
Developers can take on these challenges, implement the changes, and get the bounty awarded to them. This also happened in the NewPipe project, where bounties were handled by Team NewPipe. I had set up the account for Team NewPipe on Bountysource, which accumulated a significant sum of money over the years.
Bountysource ownership had a bit of a tumultuous history, but the present owner seems to be The Blockchain Group, a company based in France.
Current situation for me
I can attest that their back-end has stopped working, and they have stopped responding to communication attempts. My e-mail to them, sent on 7 November 2023, has not been answered.
Initially, I thought that this was too significant a sum to simply accept as a loss, so I reached out to a lawyer. However, the costs of hiring one, added to the court fees for initiating proceedings, are too high to warrant pursuing this; not to mention the significant time a legal case would demand, which I cannot spare right now.
Why so much unclaimed money
Team NewPipe consists of volunteers, meaning we spend whatever free time we find during our days on NewPipe. This means that making a one-time decision like “Sure, we will accept money via platform X” will boil down to someone doing just that, and the decision then gets published. This happened with Bountysource, and it was used by quite a few people.
However, actually deciding on what to do with that money is a totally different challenge. Just paying it forward to active developers would have worked (after finding out a way to split it fairly), but we did not want to get paid for working on a passion project, and we didn’t want to attract developers who would contribute solely for the money.
We could have also used it to fund non-monetary rewards (maybe an online course to get better at programming, or paying for a faster computer to speed up development), but this meant an additional effort to decide who gets what, how it gets there, et cetera. So nothing really happened with the money.
To make a long story short: there has been a lot of discussion on how to improve the situation (e.g. issue #2732), and we ended up choosing to donate all money towards an association.
Therefore, we actually founded said association, targeted at projects like NewPipe. This association can now take money from donations, and spend it on clearly stated goals in an open and accessible way. Sadly, this again being a team effort of volunteers, we have not been able to publish an online presence yet, so you have to bear with us on that front.
Since we now have a bank account to forward the money to, I wanted to access the Bountysource money as well, and stumbled into this situation, where we are stuck as of now.
I can’t offer any condolences, sadly. This is really just a notification to everyone who thought they donated to Team NewPipe via Bountysource: that money is probably lost and didn’t reach us. Sorry.
Despite the media industry’s efforts to reach young audiences, one demographic remains particularly underserved: kids. Apart from a handful of stand-out examples like the BBC’s long-running Newsround or Vogue’s spinoff edition Teen Vogue, there aren’t many news products aimed specifically at children and teenagers.
Topo, a Paris-based illustrated news magazine, sees this as an opportunity. “People tend to think that kids and teenagers aren’t interested in news, but there are plenty of ways to get young people interested in current affairs,” says Laurence Fredet, who has been co-editor-in-chief at Topo since it launched in 2016.
Topo’s strategy is simple: tell the news through comic strips. The magazine, whose print edition comes out every other month, publishes news articles exclusively in a graphic-novel format. It presents itself as being “for those under 20 years old (and others),” but its core target is 13- to 15-year-olds.
“Using comics gives us the time and space to provide context that young people need for each story,” says Fredet. “Plus, we can often convey that information in a playful or interesting way.”
Each article is the result of a collaboration between a print journalist and a cartoonist, whom Fredet and Cadène commission and pair from a pool of freelancers.
“The journalist is not a cartoonist and the cartoonist is not a journalist. Both are very aware of their limits, but also of the knowledge they bring to the team,” explains Fredet. “So they really ‘ping-pong’ from the text to the visual elements to put the storyboard together.”
The editors also pay particular attention to their audience’s needs when commissioning and editing articles.
“We have to keep in mind that kids and teenagers will have very different reference points than adult readers. For instance, our readers weren’t alive when [the attacks of] 11 September 2001 happened, and they may not know who Catherine Deneuve is,” Fredet explains. “So we always aim to place ourselves in their world and not make assumptions.”
The tone is key, too. “I don’t want to make ‘positive news,'” says Fredet, who doesn’t like when people sugar-coat things for kids. “At the same time, we have a real responsibility toward our young readers to not completely depress them.”
That’s where Topo’s comic-strip format offers a real advantage. By relying on drawings and speech bubbles rather than photographs and text, the magazine can discuss difficult topics in visual and engaging ways, without veering into the dull, the frightening, or the graphic.
“For instance, we did a piece on how teenagers in Saudi Arabia try to flirt with each other despite social restrictions. Some of those testimonies were very sad,” Fredet says. “But thanks to the graphic-novel format, we were able to tell these stories in a lighter way, even with a bit of humor, while staying true to what they said and keeping their quotes verbatim.”
The magazine has recently covered Putin’s censorship of journalists in Russia, how clothing brands use logos to attract young consumers, how the climate crisis is threatening the survival of small island states, and why teenagers tend to have bad body odor.
A digital PDF version of both magazines is available, but online is not the team’s focus.
“Our comics don’t read well on phones or tablets, because the cartoonists design their storyboards with a full-page view in mind, with interacting left and right panels,” Fredet says. “We offer a PDF version, but sales for those are tiny.”
“In the future, we’d like to explore the possibility of making the magazines phone-friendly, but that would require a big digital transformation project,” says Ricard. “You’d need to rework the layout of each comic, box by box, and we don’t have the means to invest in that at the moment.”
Both magazines also stick to a firm “no ads” policy to ensure complete editorial independence: “It’s very important to us to offer ad-free independent journalism, so we are financed only by our readers,” Fredet says.
Until recently, this model had allowed Topo to break even and be self-sufficient within the group. But as the price of paper has almost doubled in the last year, production costs have soared and the magazine is struggling to stay afloat.
The strategy going forward is to focus on institutional subscriptions, which can both offer a stable revenue stream and reach more readers via classrooms and school libraries.
“Ideally, our goal would be to get 8,000 schools around France subscribed to the magazine,” Fredet says. “That’s what we’re working on now. It takes time because we are a very small team, but I believe there is enormous room for growth.”
Priscille Biehlmann is a program producer for the leader development programs at the Reuters Institute for the Study of Journalism, where this piece originally ran.
In recent years, awareness of the importance of reproducible research and open science has grown in the research community. The importance of conducting robust, transparent, and open research has especially been highlighted by the reproducibility crisis, or credibility revolution (Baker, 2016; Errington et al., 2021; Vazire, 2018). Reproducible and open science practices increase the likelihood that research will yield trustworthy results, and facilitate reuse of methods, data, code, and software (Chan et al., 2014; Diaba-Nuhoho and Amponsah-Offeh, 2021; Downs, 2021; Ioannidis et al., 2014). Across fields, definitions of ‘reproducible’ and ‘open’ may vary. While some fields use the terms interchangeably, in other fields ‘reproducible’ includes elements of scientific rigor and research quality, whereas ‘open’ simply refers to making research outputs publicly accessible. Overall, these practices seek to improve the transparency, trustworthiness, reusability, and accessibility of scientific findings for the research community and society (Barba, 2018; Claerbout and Karrenbach, 1992; Nosek et al., 2022; Parsons et al., 2022; Wolf, 2017). Examples of specific practices include sharing of protocols, data and code, publishing open access, implementing practices such as blinding and randomization to reduce the risk of bias, engaging patients in designing and conducting research, using reporting guidelines to improve reporting, and using CRediT authorship statements to specify author contributions. Despite these developments, reproducible research and open science practices remain uncommon in many fields (Blanco et al., 2019; Grant et al., 2013; Hardwicke et al., 2022; Hardwicke et al., 2020; Page and Moher, 2017).
According to a survey by the European University Association (EUA) for 2020–2021, 59% of the 272 European institutions surveyed rated open science’s strategic importance at the institutional level as very high or high (Morais et al., 2021). The strategic importance of open science has also been recognized by policy-makers, for example by the UNESCO Recommendations on Open Science (UNESCO, 2021). Unfortunately, these values are not reflected in the current research assessment system. ‘Classic’ research assessment criteria, such as the Journal Impact Factor or the h-index, are still being used to assess the contribution of individual researchers. The use of these biased metrics should be discouraged, however, as they ignore the value of other research outputs (e.g. protocols, data, code) and are not useful for assessing the impact and quality of individual research contributions (https://sfdora.org/read/). Initiatives such as COARA seek to reform research assessment criteria to recognize a broad range of activities that contribute to high quality research (https://coara.eu/). These reforms are essential to incentivize reproducible research and open science practices.
Shifting incentives is not enough, however: effective education and training programs that teach reproducible research and open science skills have yet to be implemented across research fields. Researchers in various disciplines are still discussing whether these concepts apply to their work, and how they might be implemented. To explore these ideas, German Reproducibility Network members organized a virtual brainstorming event (see Box 1) to discuss strategies for making reproducible research and open science training the norm at research institutions in Germany and beyond.
Virtual unconference format
In March 2022, 96 participants, consisting mostly of members of initiatives and organizations belonging to the German Reproducibility Network (GRN) and other researchers based in Germany, took part in the virtual brainstorming event. Participants came from a variety of professional backgrounds (e.g. academic researchers, administrators, library and information science professionals), career stages (from graduate students to senior group leaders), and disciplines (e.g. psychology, biomedical sciences). The virtual brainstorming event unconference format has been explained previously (Holman et al., 2021). Supplementary file 1 provides details of this specific event. This paper shares lessons learned from two days of intensive discussions, through virtual networking events, virtual meetings, and asynchronous conversations on an online discussion board.
The first section of this paper provides a brief overview of eleven strategies that were derived from the event. Members of the research community can implement these strategies by taking action in their roles as instructors, researchers, supervisors, mentors, members of curriculum or hiring and evaluation committees, or as part of institutional leadership, research support or administrative teams. The section also highlights actions that institutions can take to support these activities, by allocating resources and monitoring impact. The second section of this paper lists a few tips for implementing several strategies. Cited resources provide additional insights for those interested in pursuing specific strategies. While making reproducible and open science training the norm might involve major changes at institutions, this journey starts with small steps towards reproducible and open science practices. Changing norms will require a broad coalition; hence, we hope that this piece inspires others to join this effort, while encouraging those who are already engaged to think creatively about opportunities to enhance the impact of their work.
MirageOS uses the OCaml language, with libraries that provide networking, storage and concurrency support that work under Unix during development, but become operating system drivers when being compiled for production deployment.
The framework is fully event-driven, with no support for preemptive threading.
Last month, thousands of people in New Hampshire took to social media to report an explosion in the sky that was strong enough to rattle windows. Naturally, aliens were blamed by some, while cooler heads theorized it may have been a sonic boom from a military aircraft. But without any evidence, who could say?
Luckily for concerned residents, this was precisely the sort of event Harvard’s Galileo Project was designed to investigate. Officially described as a way to search for “technological signatures of Extraterrestrial Technological Civilizations (ETCs)”, the project keeps a constant watch on the sky with a collection of cameras and microphones. With their gear, the team was able to back up the anecdotal reports with hard data.
As explained in a recent article on The Debrief written by project head [Avi Loeb], none of Galileo’s optical equipment captured anything interesting at the time. But its acoustic monitoring, omni-directional system (AMOS), which records from the infrasonic all the way to the ultrasonic (specifically, 0.05 hertz to 190 kilohertz), got an earful during the 12-second event.
[Avi] was able to take the data collected by AMOS and run it through the Taylor–von Neumann–Sedov solution, which was originally developed during World War II to estimate how much energy was released during the detonation of a nuclear bomb from the spherical blast-wave it produced.
By plugging the amplitude and duration of the pressure wave into the equation, he calculated an energy release of approximately 2.4 kilotons of TNT at a distance of about 40 kilometers (25 miles).
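For the curious, the underlying point-blast scaling at the heart of the Taylor–von Neumann–Sedov solution (shown here in simplified form; the article's actual estimate works from the pressure wave's amplitude and duration) ties the blast radius $R$ at time $t$ to the released energy $E$ and the ambient air density $\rho$:

```latex
R(t) \approx \left( \frac{E\, t^{2}}{\rho} \right)^{1/5}
\qquad \Longrightarrow \qquad
E \approx \frac{\rho\, R^{5}}{t^{2}}
```

The fifth-power dependence on radius is why the method is so sensitive to distance, and why even a modest pressure signature can correspond to a kiloton-scale energy release.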
Since the detonation of a tactical nuclear weapon within a 40 km radius of Mount Washington would surely have been noticed by somebody, the likely culprit was some object entering the Earth’s atmosphere.
As it turns out, the Orionid meteor shower was just about at its peak in the skies over Massachusetts at that point. Given the average velocity of these particular meteors (66 km/s), [Avi] figures the source of the sound was a space rock of about a meter in diameter meeting its fiery end.
We know, aliens would have been more fun. But this is precisely the sort of grounded research that needs to happen for Unidentified Anomalous/Aerial Phenomena (UAP) to be taken seriously. We hadn’t heard of the Galileo Project before this, but will certainly be keeping an eye on its findings going forward.
CSS Grid is one of the most amazing parts of the CSS language. It gives us a ton of new tools we can use to create sophisticated and fluid layouts.
It's also surprisingly complex. It took me quite a while to truly become comfortable with CSS Grid!
In this tutorial, I'm going to share the biggest 💡 lightbulb moments I've had in my own journey with CSS Grid. You'll learn the fundamentals of this layout mode, and see how to do some pretty cool stuff with it. ✨
CSS comprises several different layout algorithms, each designed for different types of user interfaces. The default layout algorithm, Flow layout, is designed for digital documents. Table layout is designed for tabular data. Flexbox is designed for distributing items along a single axis.
CSS Grid is the latest and greatest layout algorithm. It's incredibly powerful: we can use it to build complex layouts that fluidly adapt based on a number of constraints.
The most unusual part of CSS Grid, in my opinion, is that the grid structure (the rows and columns) is defined purely in CSS:
With CSS Grid, a single DOM node is sub-divided into rows and columns. In this tutorial, we're highlighting the rows/columns with dashed lines, but in reality, they're invisible.
This is super weird! In every other layout mode, the only way to create compartments like this is by adding more DOM nodes. In Table layout, for example, each row is created with a <tr>, and each cell within that row using <td> or <th>:
Unlike Table layout, CSS Grid lets us manage the layout entirely from within CSS. We can slice up the container however we wish, creating compartments that our grid children can use as anchors.
We opt in to the Grid layout mode with the display property:
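Something like this (the `.wrapper` class name is just a placeholder for whatever your grid parent is):

```css
.wrapper {
  display: grid;
}
```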
By default, CSS Grid uses a single column, and will create rows as needed, based on the number of children. This is known as an implicit grid, since we aren't explicitly defining any structure.
Here's how this works:
Implicit grids are dynamic; rows will be added and removed based on the number of children. Each child gets its own row.
By default, the height of the grid parent is determined by its children. It grows and shrinks dynamically. Interestingly, this isn't even a “CSS Grid” thing; the grid parent is still using Flow layout, and block elements in Flow layout grow vertically to contain their content. Only the children are arranged using Grid layout.
But what if we give the grid a fixed height? In that case, the total surface area is divided into equally-sized rows:
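A minimal sketch of that behaviour, with an assumed fixed height:

```css
/* Illustrative example: with an explicit height, the implicit
   rows divide the surface area equally among the children. */
.wrapper {
  display: grid;
  height: 300px; /* e.g. 3 children → three 100px-tall rows */
}
```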
By default, CSS Grid will create a single-column layout. We can specify columns using the grid-template-columns property:
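The relevant declaration looks something like this (selector assumed):

```css
.wrapper {
  display: grid;
  grid-template-columns: 25% 75%;
}
```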
By passing two values to grid-template-columns — 25% and 75% — I'm telling the CSS Grid algorithm to slice the element up into two columns.
Columns can be defined using any valid CSS <length-percentage> value, including pixels, rems, viewport units, and so on. Additionally, we also gain access to a new unit, the fr unit:
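For example (selector assumed):

```css
.wrapper {
  display: grid;
  grid-template-columns: 1fr 3fr;
}
```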
fr stands for “fraction”. In this example, we're saying that the first column should consume 1 unit of space, while the second column consumes 3 units of space. That means there are 4 total units of space, and this becomes the denominator. The first column eats up ¼ of the available space, while the second column consumes ¾.
The fr unit brings Flexbox-style flexibility to CSS Grid. Percentages and <length> values create hard constraints, while fr columns are free to grow and shrink as required, to contain their contents.
Try shrinking this container to see the difference:
In this scenario, our first column has a cuddly ghost that has been given an explicit width of 55px. But what if the column is too small to contain it?
Percentage-based columns are rigid, and so our ghost image will overflow, spilling out of the column.
fr-based columns are flexible, and so the column won't shrink below its minimum content size, even if that means breaking the proportions.
To be more precise: the fr unit distributes extra space. First, column widths will be calculated based on their contents. If there's any leftover space, it'll be distributed based on the fr values. This is very similar to flex-grow, as discussed in my Interactive Guide to Flexbox.
In general, this flexibility is a good thing. Percentages are too strict.
We can see a perfect example of this with gap. gap is a magical CSS property that adds a fixed amount of space between all of the columns and rows within our grid.
Check out what happens when we toggle between percentages and fractions:
Notice how the contents spill outside the grid parent when using percentage-based columns? This happens because percentages are calculated using the total grid area. The two columns consume 100% of the parent's content area, and they aren't allowed to shrink. When we add 16px of gap, the columns have no choice but to spill beyond the container.
The fr unit, by contrast, is calculated based on the extra space. In this case, the extra space has been reduced by 16px, for the gap. The CSS Grid algorithm distributes the remaining space between the two grid columns.
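Here's a sketch of the fr-plus-gap combination being described (selector and values assumed):

```css
.wrapper {
  display: grid;
  gap: 16px;
  /* fr columns flex to absorb the gap;
     50% 50% would overflow the parent instead */
  grid-template-columns: 1fr 1fr;
}
```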
What happens if we add more than two children to a two-column grid?
Well, let's give it a shot:
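A sketch of the setup (selector assumed):

```css
/* With more than two children, the grid spawns
   implicit rows so that every child gets a cell. */
.wrapper {
  display: grid;
  grid-template-columns: 1fr 1fr;
}
```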
Interesting! Our grid gains a second row. The grid algorithm wants to ensure that every child has its own grid cell. It’ll spawn new rows as needed to fulfill this goal. This is handy in situations where we have a variable number of items (e.g. a photo grid), and we want the grid to expand automatically.
In other situations, though, we want to define the rows explicitly, to create a specific layout. We can do that with the grid-template-rows property:
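An explicit grid might look like this (the specific track sizes are illustrative, not from the original demo):

```css
.wrapper {
  display: grid;
  grid-template-columns: 250px 1fr;
  grid-template-rows: 64px 1fr 100px;
}
```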
By defining both grid-template-rows and grid-template-columns, we've created an explicit grid. This is perfect for building page layouts, like the “Holy Grail” layout at the top of this tutorial.
Let's suppose we're building a calendar:
CSS Grid is a wonderful tool for this sort of thing. We can structure it as a 7-column grid, with each column consuming 1 unit of space:
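Written out long-hand (selector assumed):

```css
.calendar {
  display: grid;
  grid-template-columns: 1fr 1fr 1fr 1fr 1fr 1fr 1fr;
}
```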
This works, but it's a bit annoying to have to count each of those 1fr’s. Imagine if we had 50 columns!
Fortunately, there's a nicer way to solve for this:
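Using the same assumed selector:

```css
.calendar {
  display: grid;
  grid-template-columns: repeat(7, 1fr);
}
```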
The repeat function will do the copy/pasting for us. We're saying we want 7 columns that are each 1fr wide.
By default, the CSS Grid algorithm will assign each child to the first unoccupied grid cell, much like how a tradesperson might lay tiles on a bathroom floor.
Here's the cool thing though: we can assign our items to whichever cells we want! Children can even span across multiple rows/columns.
Here's an interactive demo that shows how this works. Click/press and drag to place a child in the grid:
The grid-row and grid-column properties allow us to specify which track(s) our grid child should occupy.
If we want the child to occupy a single row or column, we can specify it by its number. grid-column: 3 will set the child to sit in the third column.
Grid children can also stretch across multiple rows/columns. The syntax for this uses a slash to delineate start and end:
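For example (class name assumed):

```css
.child {
  grid-column: 1 / 4; /* start at column line 1, end at column line 4 */
}
```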
At first glance, this looks like a fraction, ¼. In CSS, though, the slash character is not used for division, it's used to separate groups of values. In this case, it allows us to set the start and end columns in a single declaration.
It's essentially a shorthand for this:
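That is, the slash syntax above expands to (class name assumed):

```css
.child {
  grid-column-start: 1;
  grid-column-end: 4;
}
```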
There's a sneaky gotcha here: The numbers we're providing are based on the column lines, not the column indexes.
It'll be easiest to understand this gotcha with a diagram:
Confusingly, a 4-column grid actually has 5 column lines. When we assign a child to our grid, we anchor them using these lines. If we want our child to span the first 3 columns, it needs to start on the 1st line and end on the 4th line.
Alright, time to talk about one of the coolest parts of CSS Grid. 😄
Let's suppose we're building this layout:
Using what we've learned so far, we could structure it like this:
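Something like this, using row/column numbers (the class names and track sizes are assumptions for illustration):

```css
.wrapper {
  display: grid;
  grid-template-columns: 250px 1fr;
  grid-template-rows: 80px 1fr;
}
.sidebar {
  grid-column: 1;
  grid-row: 1 / 3; /* span both rows */
}
.header {
  grid-column: 2;
  grid-row: 1;
}
.main {
  grid-column: 2;
  grid-row: 2;
}
```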
This works, but there's a more ergonomic way to do this: grid areas.
Here's what it looks like:
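A sketch (class name and track sizes assumed):

```css
.wrapper {
  display: grid;
  grid-template-columns: 250px 1fr;
  grid-template-rows: 80px 1fr;
  grid-template-areas:
    'sidebar header'
    'sidebar main';
}
```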
Like before, we're defining the grid structure with grid-template-columns and grid-template-rows. But then, we have this curious declaration:
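That declaration is grid-template-areas, which takes one quoted string per row:

```css
grid-template-areas:
  'sidebar header'
  'sidebar main';
```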
Here's how this works: We're drawing out the grid we want to create, almost as if we were making ASCII art. Each line represents a row, and each word is a name we're giving to a particular slice of the grid. See how it sorta looks like the grid, visually?
Then, instead of assigning a child with grid-column and grid-row, we assign it with grid-area!
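For example (class names assumed):

```css
.sidebar {
  grid-area: sidebar;
}
.header {
  grid-area: header;
}
```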
When we want a particular area to span multiple rows or columns, we can repeat the name of that area in our template. In this example, the “sidebar” area spans both rows, and so we write sidebar for both cells in the first column.
Should we use areas, or rows/columns? When building explicit layouts like this, I really like using areas. It allows me to give semantic meaning to my grid assignments, instead of using inscrutable row/column numbers. That said, areas work best when the grid has a fixed number of rows and columns. grid-column and grid-row can be useful for implicit grids.
There's a big gotcha when it comes to grid assignments: tab order will still be based on DOM position, not grid position.
It'll be easier to explain with an example. In this playground, I've set up a group of buttons, and arranged them with CSS Grid:
In the rendered result, the buttons appear to be in order: reading from left to right and from top to bottom, we go from one to six.
If you're using a device with a keyboard, try to tab through these buttons. You can do this by clicking the first button in the top left (“One”), and then pressing Tab to move through the buttons one at a time.
You should see something like this:
The focus outline jumps around the page without rhyme or reason, from the user's perspective. This happens because the buttons are being focused based on the order they appear in the DOM.
To fix this, we should re-order the grid children in the DOM to match the visual order, so that users can tab through from left to right and from top to bottom.
In all the examples we've seen so far, our columns and rows stretch to fill the entire grid container. This doesn't need to be the case, however!
For example, let's suppose we define two columns that are each 90px wide. As long as the grid parent is larger than 180px, there will be some dead space at the end:
We can control the distribution of the columns using the justify-content property:
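For instance, with those two 90px columns (selector assumed):

```css
.wrapper {
  display: grid;
  grid-template-columns: 90px 90px;
  justify-content: space-between; /* or center, start, end, space-around… */
}
```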
If you're familiar with the Flexbox layout algorithm, this probably feels pretty familiar. CSS Grid builds on the alignment properties first introduced with Flexbox, taking them even further.
The big difference is that we're aligning the columns, not the items themselves. Essentially, justify-content lets us arrange the compartments of our grid, distributing them across the grid however we wish.
If we want to align the items themselves within their columns, we can use the justify-items property:
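A quick sketch (selector assumed):

```css
.wrapper {
  display: grid;
  grid-template-columns: 1fr 1fr;
  justify-items: center; /* children shrink to their content width */
}
```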
When we plop a DOM node into a grid parent, the default behaviour is for it to stretch across that entire column, just like how a <div> in Flow layout will stretch horizontally to fill its container. With justify-items, however, we can tweak that behaviour.
This is useful because it allows us to break free from the rigid symmetry of columns. When we set justify-items to something other than stretch, the children will shrink down to their default width, as determined by their contents. As a result, items in the same column can be different widths.
We can even control the alignment of a specific grid child using the justify-self property:
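For example (class name assumed):

```css
.child {
  justify-self: end; /* nudge this one child to the right edge of its cell */
}
```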
Unlike justify-items, which is set on the grid parent and controls the alignment of all grid children, justify-self is set on the child. We can think of justify-items as a way to set a default value for justify-self on all grid children.
So far, we've been talking about how to align stuff in the horizontal direction. CSS Grid provides an additional set of properties to align stuff in the vertical direction:
align-content is like justify-content, but it affects rows instead of columns. Similarly, align-items is like justify-items, but it handles the vertical alignment of items inside their grid area, rather than horizontal.
To break things down even further:
justify — deals with columns.
align — deals with rows.
content — deals with the grid structure.
items — deals with the DOM nodes within the grid structure.
Finally, in addition to justify-self, we also have align-self. This property controls the vertical position of a single grid item within its cell.
There's one last thing I want to show you. It's one of my favourite little tricks with CSS Grid.
Using only two CSS properties, we can center a child within a container, both horizontally and vertically:
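Here's the trick (selector assumed):

```css
.wrapper {
  display: grid;
  place-content: center;
}
```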
The place-content property is a shorthand. It's syntactic sugar for this:
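In other words, the shorthand expands to:

```css
.wrapper {
  justify-content: center;
  align-content: center;
}
```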
As we've learned, justify-content controls the position of columns. align-content controls the position of rows. In this situation, we have an implicit grid with a single child, and so we wind up with a 1×1 grid. place-content: center pushes both the row and column to the center.
There are lots of ways to center a div in modern CSS, but this is the only way I know of that only requires two CSS declarations!
In this tutorial, we've covered some of the most fundamental parts of the CSS Grid layout algorithm, but honestly, there's so much more stuff we haven't talked about!
If you found this blog post helpful, you might be interested to know that I've created a comprehensive learning resource that goes way deeper. It's called CSS for JavaScript Developers.
The course uses the same technologies as my blog, and so it's chock full of interactive explanations. But there are also bite-sized videos, practice exercises, real-world-inspired projects, and even a few mini-games.
If this post resonated with you, you'll love the course. It follows a similar approach, but covers the entire CSS language, with hands-on practice to make sure you're actually developing new skills.
It's specifically built for folks who use a JS framework like React/Angular/Vue. 80% of the course focuses on CSS fundamentals, but we also see how to integrate those fundamentals into a modern JS application, how to structure our CSS, stuff like that.
If you struggle with CSS, I hope you'll check it out. Gaining confidence with CSS is game-changing, especially if you're already comfortable with HTML and JS. When you complete the holy trinity, it becomes so much easier to stay in flow, to truly enjoy developing web applications.
And for the next week, it's 50% off for Black Friday! You can learn more here:
Why does it feel like everybody at OpenAI has lost their mind?
In what's arguably turning into the hottest AI story of the year, former OpenAI CEO Sam Altman was ousted by the rest of the company's nonprofit board on Friday, leading to a seemingly endless drama cycle that's included hundreds of staffers threatening to quit en masse if the board doesn't reinstate him.
A key character in the spectacle has been OpenAI chief scientist and board member Ilya Sutskever, who, according to The Atlantic, likes to burn effigies and lead ritualistic chants at the company, and who appears to have been one of the main drivers behind Altman's ousting.
But he evidently regretted his decision almost immediately, making a bizarre attempt to save face just days later.
"I never intended to harm OpenAI," he tweeted Monday morning, not long after Microsoft, which owns a 49 percent stake in the company, offered Altman a CEO position. "I deeply regret my participation in the board's actions."
The entire situation is baffling. OpenAI, which is quickly approaching a $90 billion valuation, has been thrown into a deepening crisis by its overarching non-profit arm.
In the meantime, we're getting a closer-than-ever peek at what makes OpenAI's power players tick. Case in point: Sutskever has established himself as an esoteric "spiritual leader" at the company, per The Atlantic, cheering on the company's efforts to realize artificial general intelligence (AGI), a hazy and ill-defined state in which AI models become as capable as humans, or more so, or maybe, according to some, even godlike. (His frenemy Altman has long championed attaining AGI as OpenAI's number one goal, despite warning for many years about the possibility of an evil AI outsmarting humans and taking over the world.)
Still, the Atlantic's new details are bizarre, even by the standards of tech industry wackadoos.
"Feel the AGI! Feel the AGI!" employees reportedly chanted, per The Atlantic, a refrain that was led by Sutskever himself.
The chief scientist even commissioned a wooden effigy to represent an "unaligned" AI that works against the interest of humanity, only to set it on fire.
In short, instead of focusing on meaningfully advancing AI tech in a scientifically sound way, some board members sound like they're engaging in weird spiritual claims.
Sutskever's strange behavior may also help explain at least some of this weekend's chaos.
At the time of writing, we still don't know exactly where OpenAI and Microsoft stand. In a Monday afternoon tweet, Altman argued that "Satya and my top priority remains to ensure OpenAI continues to thrive," referring to Microsoft CEO Satya Nadella, a vague statement that can be interpreted in several different ways.
For now, we can only sit and watch as the chaos unfolds. Given the rollercoaster over the last couple of days, it's more than likely we'll see a shuffle of OpenAI's non-profit board — if the company even lives to tell the tale, that is.
There's a good chance that the board members who united to boot Altman last week drank just a little too much of the AGI Kool-Aid and got spooked by the possibility that humanity was hurtling toward the singularity (or heck, maybe they were right to think that!).
Or was it just plain-old locker room hostility and internal rivalries, with colleagues failing to see eye to eye?
It's a perplexing and riveting story that seemingly has even more twists and turns than HBO's hit TV drama "Succession."
OpenAI’s interim Chief Executive Officer Mira Murati plans to re-hire her ousted predecessor Sam Altman and former President Greg Brockman in a capacity that has yet to be finalized, according to people with direct knowledge of the matter.
Murati, who was installed on Friday after the board fired Altman, is negotiating with Adam D’Angelo, the CEO of Quora Inc., who is acting as a representative of the OpenAI board, said the people, who asked not to be identified because the negotiations are private and remain fluid. Brockman, who was also a board member, quit in protest hours after Altman’s departure.
Staph infections in hospitals are a serious concern, so much so that the term Methicillin-resistant Staphylococcus aureus (MRSA) is as commonly known as MRI. Far less known is that in many of these cases, patients are infecting themselves.
In heart surgeries and knee and joint-replacement procedures, up to 85 percent of staph infections after surgery come from patients’ own bacteria, according to a 2002 study in the New England Journal of Medicine.
Despite the threat that staph bacteria pose to patients, there is no uniformly accepted procedure to reduce surgical-site infections in the United States. Now, a team of researchers led by the University of Iowa is recommending guidelines that will cut the infection rate by 71 percent for staph bacteria and 59 percent for a broader class of infectious agents known as gram-positive bacteria. In a paper published Thursday (June 13) in the British Medical Journal, the researchers recommend three steps to reduce post-surgical staph infections:
• Swab patients’ noses for two strains of staph (MRSA and MSSA) before surgery.
• For the 30 percent of patients who have staph naturally in their noses, apply an anti-bacterial nose ointment in the days before surgery.
• At surgery, give an antibiotic specifically for MRSA to patients who have the MRSA strain in their noses; for all others, give a more general antibiotic.
Marin Schweizer, an assistant professor of internal medicine at the UI and the lead author on the BMJ paper, notes the nose ointment costs around $20 a tube and is usually covered by health insurance. “We now know we can target staph where it exists naturally in some patients, which is in the nose,” she says. “That’s the bull's-eye, and we can wipe it out. What we are recommending is a really simple, cheap solution to a big problem.”
The group is now testing the protocol at 20 community hospitals nationwide, including the UI Hospitals and Clinics, as well as 10 Veterans Affairs health care centers, including the one in Iowa City. The VA is funding the study.
The recommendations come from the team’s review of 39 studies of various surgical-site infection practices employed at hospitals nationwide. Many of the individual studies involved patient samples too small to produce statistically significant results. By combining studies with similar treatment practices and analyzing the outcomes alongside studies with different treatments, the UI-led team identified a best approach, with a sample large enough for statistical significance.
“The combination matters, and the treatment being in a bundle matters, too,” says Schweizer, whose primary appointment is in the Carver College of Medicine. “By putting it all together in one care bundle, that one checklist, it becomes standard operating procedure for every hospital.”
Three in ten people in the U.S. unwittingly carry staph in their noses, where they reside benignly as the alpha bacterium in a warm, moist olfactory world. While harmless in the nose, staph can wreak major havoc if introduced within the body, such as a wound healing from surgery. In fact, the researchers found that 78 percent to 85 percent of surgical-site infections involving staph come from the patients’ own bacteria. In those cases, the infecting agents were traced to bacteria in the patients’ noses by comparing the DNA profile of the bacteria at the surgical site with those in the patients’ noses. Most likely, people touched their noses and then touched the wound, freeing the bacteria to roam.
Those post-surgery staph infections mean pain, personal and financial, with two studies estimating treatment to cost between $40,000 and $100,000, most of it due to follow-up surgeries.
Despite the risks and repercussions, the team found that 47 percent of hospitals reported in a survey that they don’t use the nose ointment for staph carriers.
Contributing authors from the UI include professors Eli Perencevich and Loreen Herwaldt. Research assistants Jennifer McDanel, Jennifer Carson, and Michelle Formanek, all from the UI, also contributed to the work, along with Barbara Braun and Joanne Hafner, from The Joint Commission in Oakbrook Terrace, Ill.
The U.S. Department of Health and Human Services funded the study, after Schweizer and her colleagues responded to the department’s call for proposals to reduce surgical-site infections.
Toronto - The next time your friend displays remarkable openness to their opposite political camp’s ideas, you might try pinching them.
Okay, we don’t really recommend that. But new evidence shows that people with increased sensitivity to pain are also more likely to endorse values more common to people of their opposite political camp. It doesn’t stop there. They also show stronger support for the other camp’s politicians and, get this, were more likely to vote for Donald Trump in 2020 if they were liberal, or Joe Biden if they were conservative.
Even lead researcher Spike Lee, an associate professor of marketing at the University of Toronto’s Rotman School of Management who is cross-appointed to the University’s Department of Psychology, was pinching himself at the revelations.
“We were honestly not expecting to see this kind of cross-aisle effects of pain sensitivity,” said Prof. Lee, who started thinking about the research idea while enjoying the oral freezing experience in his dentist’s chair. “When we first found it, we thought it might be a fluke. That’s why we ran a replication study. We found it again. We ran extended replications and follow-up studies. We kept finding it.”
The connection is perhaps not so surprising considering that we experience pain – whether it’s the physical pain of stubbing our toe or the social pain of getting bulldozed in a political argument – in a similar part of the brain. We can also experience pain vicariously by witnessing other people’s distress or perceiving a social injustice.
Prof. Lee and his research colleague, psychology graduate student Cecilia Ma, ran seven different studies with more than 7,000 U.S. participants to test competing theories of what pain sensitivity does to our perceptions of moral and political threats: does it heighten them across the board, only affect threats to the sensibilities we personally hold dear, or make us more sensitive to somebody else’s?
To gauge pain sensitivity, they used a validated self-report instrument called the Pain Sensitivity Questionnaire; they also asked participants about their political orientation and assessed the foundations of their moral outlook.
Liberals with higher pain sensitivity showed greater affinity for typically conservative moral values such as loyalty and authority. Pain-sensitive conservatives meanwhile showed more support for values such as care and fairness, usually associated with liberals. The pattern continued when participants were asked about their 2020 voting intentions and their support for Democrat and Republican politicians.
So, along with being quicker to yelp “ouch!”, does that mean the pain-sensitive are also confused about their own political orientation? Prof. Lee cautions that “it’s not that their profile of moral sensitivities shifts from ‘only supporting our side’ to ‘only supporting the other side.’ Instead, they tend to be more supportive of both sides’ views.”
While the research doesn’t give a clear solution for how to find middle ground in a politically polarized society, it does shed light on a previously unexplored influence on our moral and political views. Far from being purely rational, most people’s views “are infused with moral feelings, with emotional reactions to what’s right and wrong,” said Prof. Lee. “The better we understand the bases of a person’s moral feelings, the better we can explain and predict their political views.”
Bringing together high-impact faculty research and thought leadership on one searchable platform, the Rotman Insights Hub offers articles, podcasts, opinions, books and videos representing the latest in management thinking and providing insights into the key issues facing business and society. Visit www.rotman.utoronto.ca/insightshub.
The Rotman School of Management is part of the University of Toronto, a global centre of research and teaching excellence at the heart of Canada’s commercial capital. Rotman is a catalyst for transformative learning, insights and public engagement, bringing together diverse views and initiatives around a defining purpose: to create value for business and society. For more information, visit www.rotman.utoronto.ca
-30-
For more information:
Ken McGuffin Manager, Media Relations
Rotman School of Management
University of Toronto
E-mail: mcguffin@rotman.utoronto.ca
Earlier this year, I asked Sam Altman whether decisions made by OpenAI’s leaders might one day lead to unemployment among the masses. “Jobs are definitely going to go away, full stop,” he told me. He couldn’t have known then that his would be among the first. In a blog post released this afternoon, OpenAI—the artificial-intelligence juggernaut for which Altman was the CEO—announced that he would be leaving, effective immediately, because, according to the statement, “he was not consistently candid in his communications with the board.”
The statement did not specify the nature of Altman’s alleged misrepresentations, but they must have concerned serious matters to merit such a dramatic and public rebuke. Altman did not reply to multiple texts seeking comment, but in a post on X (formerly Twitter), he said that he’d loved his time at OpenAI, and that it was transformative for him “personally, and hopefully for the world a little bit.”
The suddenness of this announcement, and the Icarus-like fall it represents for Altman, is difficult to overstate. In 2015, Altman convened a now-famous dinner at the Rosewood Sand Hill, in Menlo Park, California, with Elon Musk and a small group of others, at which they agreed to found OpenAI. Various tech luminaries committed $1 billion to the company, including Musk, who agreed to co-chair its board with Altman. Their partnership lasted only until 2018, when Musk made a play to become the company’s CEO, as reported by Semafor. Altman led the resistance and, a year later, assumed the CEO title for himself.
Altman was internet-famous before founding OpenAI, primarily because of his role as the president of Y Combinator, Silicon Valley’s most prestigious start-up accelerator. But after OpenAI released ChatGPT late last year, he became a global celebrity and embarked on a world tour, meeting with more than 10 heads of state. When I joined him for his swing through East Asia last June, he was mobbed with selfie-seekers everywhere we went. He didn’t shy away from his new, larger-than-life image. He told me, and others, that he imagined AI bringing a new kind of society into being. He said that it would be the “greatest golden age.”
Executives are on their best behavior for reporters, but in speaking with two of Altman’s then–fellow board members, Ilya Sutskever and Greg Brockman, I never heard anything that suggested palace intrigue or even much dissent. Just last week, Satya Nadella, the CEO of Microsoft, the company’s main financial backer, appeared onstage with Altman at the company’s developer day. According to Axios, Microsoft learned about the news only a minute before it was announced, which was one minute earlier than employees at OpenAI reportedly found out. It’s hard to think of an analogue to Altman’s firing; it’d be as if Bill Gates were abruptly shown the door at Microsoft in 1995.
OpenAI has so far stayed ahead of the pack in AI, despite a long and growing list of competitors, including start-ups and tech giants both. In part, it’s done so by remaining largely drama-free. That’s over now, and the fallout goes beyond Altman. Brockman, another of the company’s co-founders, announced to OpenAI’s employees that he had resigned, “based on today’s news.” The company’s earlier statement said that he’d retained his position at the company but stepped down as its chairman. Either way, he’s out now too.
Mira Murati, who was formerly the company’s CTO, has been named interim CEO. I’ve sat down with Murati twice, most recently in September, at The Atlantic Festival. Her message was, to my ear, indistinguishable from Altman’s. She told me that OpenAI was pressing forward in trying to build an AI that could transcend human understanding of science. She said that it would be up to society to adapt to this new technology.
But that was before Altman’s ouster. Perhaps Murati will soon articulate some new vision, or perhaps that task will fall to Altman’s permanent replacement. In the meantime, the resonant mystery, the thing that has group texts across Silicon Valley—and, indeed, the world—abuzz with speculation, is what Altman could have done to deserve all this. Was it a colorful indiscretion in his personal life? An internal power play? Did he go rogue in some way? Once we know, we’ll be able to say more about OpenAI’s future, and his.
Though Nasa’s LEW-TOPS-80 patent aligns with this goal, proposing a thermoacoustic engine paired with an alternator to generate electricity in space, it has yet to unveil any prototypes or specific performance metrics.
The Chinese generator was created by the Technical Institute of Physics and Chemistry (TIPC) at the Chinese Academy of Sciences (CAS) and is about 2 metres (6½ feet) in length with a dumbbell-like shape.
It operates with impressive efficiency, according to Professor Hu Jianying of the TIPC.
“The current thermoelectric conversion efficiency is about 28 per cent; with a hotter 600 degree thermal fluid, efficiency could reach 34 per cent,” he said.
Such efficiency can rival that of steam turbines.
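To put the reported figures in context, the theoretical ceiling for any heat engine is the Carnot efficiency, which depends only on the hot- and cold-reservoir temperatures. The sketch below, assuming the article's 600-degree Celsius figure is the hot-side fluid temperature and taking a roughly 25-degree Celsius ambient sink (an assumption not stated in the article), shows that the claimed 34 per cent would be about half of the ideal limit — a plausible ratio for a real Stirling-cycle machine:

```python
def carnot_efficiency(t_hot_c: float, t_cold_c: float = 25.0) -> float:
    """Ideal Carnot efficiency, eta = 1 - T_cold / T_hot, with reservoir
    temperatures given in degrees Celsius and converted to kelvin."""
    t_hot_k = t_hot_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return 1.0 - t_cold_k / t_hot_k

eta_max = carnot_efficiency(600.0)   # ceiling with a 600 C hot fluid: ~0.66
fraction = 0.34 / eta_max            # the reported 34% as a share of that ceiling

print(f"Carnot limit at 600 C: {eta_max:.1%}")
print(f"Reported 34% is about {fraction:.0%} of the Carnot limit")
```

Reaching roughly half of the Carnot limit is in line with well-engineered Stirling and steam-cycle machines, which lends credibility to the quoted numbers without any exotic physics.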
Professor Luo Ercang of the TIPC highlighted the generator’s reliability, simple design, few moving parts and compatibility with various heat sources.
“It operates quietly and efficiently, and can use different types of heat, including solar energy, waste heat and biomass,” a CAS statement quoted Luo as saying.
The innovative system comprises a thermoacoustic Stirling engine and a linear motor encased in a rigid shell. The engine converts heat into sound waves that resonate to form a stable sound field. These waves then drive a piston, which in turn generates electricity.
“High-pressure helium at 15 megapascals serves as the working medium, and the absence of mechanical parts needing lubrication means the generator could exceed a decade of lifespan,” Hu said.
While the Stirling engine technology – first developed by Scottish engineer Robert Stirling in 1816 – faces manufacturing challenges due to material requirements for containing high-pressure gas, its low noise and high reliability remain appealing for specialised contexts.
Sweden’s use of it in submarines and China’s own advancements with Stirling engines for naval applications underscore the technology’s value. Modern air-independent propulsion (AIP) submarines, including the Chinese navy’s 039A/B type, use Stirling engines as power supplies.
The thermoacoustic generator developed by CAS not only breaks new ground for traditional Stirling engine designs but also integrates a motor that converts sound directly into electrical energy.
Hu noted that the motor’s design avoided harmful vibrations and maintained an airtight seal within the mechanism.
“The linear motor, consisting of a piston driven by sound waves, permanent magnets and coils, contributes to the high conversion efficiency. Its symmetrical design also eliminates some harmful vibrations,” he said.
“The linear motor keeps a very tiny space, about the thickness of a human hair, between the piston and cylinder. This prevents the parts from touching while maintaining the internal airtight environment.”
Despite the lack of an academic paper linked to the CAS announcement, the breakthrough underscores the potential of thermoacoustic Stirling generators to revolutionise power generation in diverse fields.
“It is a promising new generation technology for solar thermal, biomass power generation, and distributed energy systems,” Hu said.