Sunday, February 28, 2021

Is Bitcoin Worth $1M?

Bitcoin Investment Thesis From First Principles.

Cryptocurrencies have once again entered the public spotlight, following Bitcoin's recent surge to USD $40,000. Having begun my career trading cryptocurrency derivatives on Wall Street, I am naturally intrigued by crypto-assets as an investment.

Like many, I had always been a skeptic, despite having built quant algorithms to trade Bitcoin. Every attempt to find an objective view is met with the irrationality of Bitcoin cult followers, sensationalist media, or the FOMO of retail traders.

The lack of a first-principles investment thesis for Bitcoin is concerning. Without conviction, we are at the mercy of our own emotions during Bitcoin's volatility. It is only when you are confident in your strategy that you can stomach large downturns and be greedy when others are fearful. Even if you bought at the peak of Bitcoin Mania in 2018, you would still have returned 100% within 3 years had you not sold.

Today I write not from a trader's perspective, but from that of a rational long-term investor. This is not a get-rich-quick technical trading tutorial; those are a fugayzi, fake. I hope this will help you make a better-informed decision about incorporating Bitcoin into your portfolio.

Two Major Use Cases for Crypto Tokens

A quick search will reveal a plethora of over a thousand different cryptocurrency coins. These include the most popular cryptocurrency, Bitcoin, and also other ‘alt coins’ such as Ethereum, Litecoin, and Dogecoin. While it may be overwhelming to understand what each crypto token does, there are two main use cases — (1) as a utility protocol, or (2) as a store of value.

Utility protocol tokens exist to distribute limited network resources, allowing users to gain access to features such as smart contracts or payment systems. Maintaining such a system (also known as the blockchain) has real-world costs in the form of computational power. These tokens are rewarded to the 'miners' that maintain the blockchain.

Because Bitcoin and several other alt coins are finite in supply and robust against hacking, there are arguments for them to become non-fiat store-of-value assets like gold. What makes cryptocurrencies potentially more special than gold is that they can also be non-sovereign, meaning they are not associated with any country. Commodities like gold are generally sold in USD denominations, thereby giving America too much control over the currency markets.

I will make the case that the only compelling reason to invest in Bitcoin is its potential to emerge as the dominant non-sovereign, non-fiat store-of-value. Its valuation as a utility protocol is by definition limited, as the purpose of a blockchain is to provide fast, cheap transactions at scale. The utility protocol market is also likely to be fragmented across several crypto tokens, each serving its own niche market segment; just look at how many payment options there are today.

Part 1 — The Value of Cryptocurrency as Utility Protocol Tokens

Any given cryptocurrency protocol can be seen as a simplified economy, where utilities are traded for tokens that carry some monetary value. At maturity, these tokens do no more than allocate computational resources efficiently.

The Value of Utility Tokens Is Tied to Their Underlying Costs

The maturity of cryptocurrencies is a reasonable proxy for economic equilibrium, where marginal utility equals the marginal cost. This means that the value of these cryptocurrency tokens cannot materially decouple from the underlying cost of resources.

If you are still not convinced, consider the following thought experiment. Suppose we have a blockchain that is expensive to use. One of two likely scenarios will occur. In scenario one, miners compete, undercutting one another to claim more reward tokens and thereby driving down the cost. In scenario two, users create a forked blockchain: an identical network, but with lower costs. These scenarios repeat until there are no further arbitrage incentives.

Valuing the ‘Market Cap’ of Utility Tokens

Earlier we noted that each cryptocurrency protocol is effectively its own micro-economy. The GDP or 'market cap' of such an economy can be explained with monetary economics theory.

The money supply or 'market cap', M, is simply the aggregate cost of the computational resources needed to maintain the blockchain (PQ) divided by the velocity (V) with which the crypto token is used; this is the classic equation of exchange, MV = PQ, rearranged for M.

M = PQ/V

where:

  • PQ = the total cost of computational resources consumed (price * quantity)
  • V = the average frequency with which a token is used (velocity)

In the long run, the cost of computational resources (P) is deflationary due to Wright's Law. Furthermore, we are currently far from the theoretical upper limit of velocity (V), as circulating tokens can potentially whizz around at the speed of computational processing.

These trends ultimately reduce the ‘market cap’ (or M) of a particular utility protocol in the long run. In plain English this means that blockchain technology is bullish for its users, capable of delivering fast, robust, cheap utility services such as payments at scale — it is however bearish for investors in utility tokens. In fact Ethereum’s in-built Gas protocol aims to ensure this.

The King of Utility Protocol Tokens as an Investment

The current leading utility protocol token is Ethereum, with many dApps and protocols built on the Ethereum network. As an investor, it is therefore reasonable to begin an analysis of utility protocols by first valuing Ethereum. All quantitative calculations are located here.

As of the day of posting, the daily cost of transactions on the Ethereum network totals about 2,400 Ethereum tokens, or roughly USD $3 million per day. If we assume that the Ethereum network grows 1.5x YoY and the cost of resources decreases by 20% YoY, the 'market cap' of Ethereum effectively doubles every year.

Assuming that the velocity of Ethereum is 7, equal to that of the US dollar, the 10-year projection of Ethereum's total value comes to USD $200 billion. The current total value of Ethereum is USD $158 billion, giving it a mere 26% upside over 10 years. This is simply insufficient return for a speculative asset with inherent long-term deflationary forces.
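
To make the arithmetic concrete, here is a minimal sketch of that projection in C++. The inputs simply mirror the assumptions quoted above (including the claim that the 'market cap' effectively doubles every year); the article's linked calculations remain the authoritative source, so treat the printed figure as illustrative.

#include <cstdio>

int main() {
    // Inputs taken from the assumptions stated above; adjust to taste.
    const double daily_cost_usd = 3e6;    // ~2,400 ETH of daily transaction costs
    const double pq_growth      = 2.0;    // "the 'market cap' effectively doubles every year"
    const double velocity       = 7.0;    // V, assumed equal to that of the US dollar
    const int    years          = 10;

    double annual_pq = daily_cost_usd * 365.0;   // PQ today, annualised
    for (int y = 0; y < years; ++y)
        annual_pq *= pq_growth;

    const double market_cap = annual_pq / velocity;   // M = PQ / V
    std::printf("Projected utility 'market cap' in %d years: about USD $%.0f billion\n",
                years, market_cap / 1e9);
    return 0;
}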

Part 2 — The Value of Cryptocurrency as Money

Money, in a sense, is simply a large debt ledger or Excel table. It was invented so that we no longer need to barter for specific goods, as money acts as a unit of account against which goods and services can be compared. Money can either act as a store-of-value (e.g. gold and physical cash), or it can be a means of payment (Visa, PayPal, Apple Pay, Cash App, physical cash).

An interesting observation here is that traditionally, only physical cash has been able to act as both a store-of-value and a means of payment. We certainly cannot store value on a credit card, nor can we purchase dinner with a lump of gold.

The Potential for Cryptocurrency as Money

One way to gauge the potential of cryptocurrency as money is to compare it against existing technologies. At first glance, it appears that cryptocurrencies also have the potential to serve this dual function.

It can be argued that cryptocurrency is a better store-of-value than physical cash, given that it cannot be diluted at a whim like fiat currencies. Furthermore, it is secure and practically unhackable; if this were not true, Bitcoin would devalue rapidly. As it is not a physical commodity, it poses minimal storage costs (or cost of carry), unlike gold.

As a means of payment, cryptocurrencies currently lack day-to-day transaction functionality compared to Apple Pay or Google Pay. Nonetheless, they have superiority in certain niche use cases such as international payments.

With the recent support for Bitcoin on popular fintech platforms such as Square's Cash App and PayPal's Venmo, payment friction is rapidly reducing. This support also serves to evangelise Bitcoin, acting as a powerful marketing instrument for wider adoption.

The value of a cryptocurrency is effectively derived from its value as a means of payment plus its value as a store-of-value. As payments are simply a special case of a utility protocol, we have already discussed at length why there is limited value here.

Rather, the bulk of a cryptocurrency's valuation is its potential to emerge as a dominant non-sovereign, non-fiat store-of-value.

The Case for Bitcoin as the Dominant Store-of-Value

An asset is described as a store-of-value when its price is decoupled from the cost of manufacturing and storing it, or from its functional utility. Gold is an example of a store-of-value, as it is arbitrarily expensive relative to its utility: most gold is kept as giant inert bullion for no other purpose.

Given that Bitcoin is the most popular cryptocurrency with extremely robust features, it is the leading candidate to be the dominant store-of-value. While it can be argued that more than one crypto-asset could serve this purpose, there is no real utility in having multiple candidates; we need only look at silver, which trades at a fraction of the value of gold.

A logical place to start estimating the value of Bitcoin, if it successfully becomes the dominant non-fiat store-of-value, is the current standard: gold bullion. Currently there are c.198,000 metric tonnes of gold above ground, valued at USD $11.6 trillion. About 39% of this (USD $4.5 trillion) is held as bullion, split between the private sector and national treasuries.

The value of Bitcoin in 10 years will be some multiple (or fraction) of the total value of gold bullion today. Given that we expect governments to be more prudent, a bearish case might assign multiples of 0.25x for national treasuries and 0.75x for the private sector. Similarly, a bullish case might see valuations of 1x for national treasuries and 3x for the private sector.

This gives Bitcoin a per-coin value of between USD $130,000 — $530,000 if it is successful in becoming the dominant non-fiat store-of-value. Assigning multiples is the most subjective element of this analysis, and all underlying assumptions are found here.

It should be caveated that a tremendous upside alone does not make Bitcoin a good investment; more on this later in the article. The downside of unsuccessful adoption is effectively a 100% loss. However, a more bullish case may be possible if Bitcoin is widely used to replace unstable sovereign currencies, such as the Venezuelan bolivar.

The Case for Bitcoin to be Part of the International Reserves

Displacing gold bullion may just be the tip of the iceberg, as there is also the potential for Bitcoin to become integrated into international reserves. There it would act as a non-sovereign, non-fiat store-of-value asset: one that is country-agnostic.

Currently, gold represents about 11% of the USD $12 trillion in total international reserves. The remaining 89% is held as a basket of international fiat currencies, with a large proportion held in USD. The disproportionate USD holdings are a result of all major commodities being priced in USD and America being a significant player in global trade.

With recent trade wars and ongoing economic tensions between countries such as China and the United States, it is not ideal for a country to hold the majority of its reserves in USD. Reducing the amount of USD in international reserves reduces the amount of control America has over other countries.

This makes a case for a potential non-sovereign, non-fiat, store-of-value asset like Bitcoin compelling. In the instance that Bitcoin is able to replace 10% — 75% of fiat-currency international reserves, this adds an additional USD $60,000 — $440,000 to the value of Bitcoin.

Adding both scenarios up, if Bitcoin is successful in becoming a non-sovereign, non-fiat, store-of-value asset, its potential valuation in 10 years will be USD $191,000 — $970,000.
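
For readers who want to sanity-check the arithmetic, here is a rough C++ sketch of the per-coin maths. The split of the $4.5 trillion of bullion between national treasuries and the private sector, and the circulating Bitcoin supply, are illustrative assumptions of mine (the article's own inputs live in its linked sheet); the multiples are the bear and bull cases described above.

#include <cstdio>

int main() {
    // Illustrative assumptions, not figures quoted in the article.
    const double bullion_treasury_usd = 2.0e12;        // assumed treasury share of the $4.5T in bullion
    const double bullion_private_usd  = 2.5e12;        // assumed private-sector share
    const double fiat_reserves_usd    = 0.89 * 12e12;  // 89% of the $12T international reserves
    const double btc_supply           = 18e6;          // assumed circulating Bitcoin supply

    // Bear case: 0.25x treasuries, 0.75x private sector, 10% of fiat reserves.
    const double bear = (0.25 * bullion_treasury_usd + 0.75 * bullion_private_usd
                         + 0.10 * fiat_reserves_usd) / btc_supply;
    // Bull case: 1x treasuries, 3x private sector, 75% of fiat reserves.
    const double bull = (1.00 * bullion_treasury_usd + 3.00 * bullion_private_usd
                         + 0.75 * fiat_reserves_usd) / btc_supply;

    std::printf("Per-coin value if adoption succeeds: roughly $%.0f to $%.0f\n", bear, bull);
    return 0;
}

With these inputs the bear and bull cases land close to the $191,000 and $970,000 figures above, but the exact numbers move with the assumed bullion split and coin supply.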

Part 3 — Determining if Bitcoin is a Rational Investment

An important aspect of determining whether Bitcoin is a rational investment is to see if it is a positive expected value bet. At my most recent entry point, Bitcoin was valued at USD $10,000. Given the valuation projections made above, I only needed to be right about 1–5% of the time to break even in the bull and bear scenarios respectively.

Given the increased adoption of Bitcoin by major fintech players (Venmo and Cash App) and hedge funds, I think it is reasonable to put the chance of success at greater than 5%. This not only makes it a positive expected value bet, but one with an asymmetric payoff skew of -1x to 100x. It is akin to purchasing a lottery ticket that, on average, earns you money.

At the current valuation of USD $30,000, you need to be right 3% of the time in the bull case and 19% in the bear case. If you think that the odds of Bitcoin successfully becoming a non-sovereign, non-fiat store-of-value asset within the next 10 years are greater than that, you should invest in Bitcoin.

Implied Success Odds for Breakeven
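
These implied odds follow from a simple expected-value identity: if failure means a total loss, the break-even probability of success is just the entry price divided by the target value. A minimal C++ sketch, using the bear and bull valuations derived above:

#include <cstdio>

// With a total loss on failure, expected value is p * target - price,
// so the break-even success probability is price / target.
static double breakeven_probability(double entry_price, double target_value) {
    return entry_price / target_value;
}

int main() {
    const double bear_target = 191000.0;   // bear-case valuation from above
    const double bull_target = 970000.0;   // bull-case valuation from above
    const double entry_price = 10000.0;    // example entry point

    std::printf("Break-even odds at a $%.0f entry: %.1f%% (bull case) to %.1f%% (bear case)\n",
                entry_price,
                100.0 * breakeven_probability(entry_price, bull_target),
                100.0 * breakeven_probability(entry_price, bear_target));
    return 0;
}

At a $10,000 entry this prints roughly 1% to 5%, matching the range quoted above; the same function applies at any other entry price.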

If you have accepted that Bitcoin's net expected value is positive at the current price, then you should enter a position. However, given that Bitcoin is an extremely speculative asset, the best way to manage risk is through bet sizing. It should occupy a small proportion of your portfolio (1–5%), and it certainly should not be bought on margin. For a more technical breakdown of how traders and poker players manage bet sizing, read up on the Kelly criterion.
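
To illustrate how such a sizing rule works, here is a sketch of the Kelly fraction for a simple win-or-bust bet. The 5% success probability and the ~96x net payoff (a $10,000 entry against the ~$970,000 bull target) are placeholder assumptions for illustration, not recommendations.

#include <algorithm>
#include <cstdio>

// Kelly fraction for a binary bet: win b-to-1 with probability p, lose the stake otherwise.
static double kelly_fraction(double p, double b) {
    return std::max(0.0, p - (1.0 - p) / b);
}

int main() {
    const double p = 0.05;                        // assumed probability of success
    const double b = 970000.0 / 10000.0 - 1.0;    // net payoff multiple on success
    std::printf("Kelly-optimal allocation: %.1f%% of the portfolio\n",
                100.0 * kelly_fraction(p, b));
    return 0;
}

With these inputs the Kelly-optimal allocation comes out around 4%, in the same low single-digit range as the 1–5% suggestion above; many practitioners bet only a fraction of Kelly to allow for uncertainty in p.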

Conclusion

As a concluding note, if any of the fundamental assumptions change, then you need to update your investment thesis accordingly. An easy way to keep up to date with disruptive innovation news in FinTech is through Konran: actionable, 30-second disruptive ticker stories released twice a week.

Happy investing.

To not miss my future essays, follow me on Twitter.



from Hacker News https://ift.tt/2Pdq3He

Show HN: I created an app to make my online reading experience easier

Comments

from Hacker News https://ift.tt/2Odaoah

152kb WebAssembly interpreter that runs on six OSs with Cosmopolitan

  • Static/dynamic NoFloat modes (integer-only wasm subset)
  • "strace" mode (see d_m3EnableStrace)
  • Update uvwasi to version 0.0.11
  • RawFunction unified/extended with RawFunctionEx
  • Multi-value module parsing
  • MinGW support
  • Python bindings
  • Lots of bugfixes, memory usage improvements, infrastructure updates



from Hacker News https://ift.tt/37V8YIP

The Univac System [pdf]

Comments

from Hacker News https://ift.tt/2NLbk64

Wikinews mourns loss of volunteer John Shutt

This article mentions the Wikimedia Foundation, one of its projects, or people related to it. Wikinews is a project of the Wikimedia Foundation.

Saturday, February 27, 2021 

On Friday, Wikinews learned Dr John Nathan Shutt, a long-time contributor to both Wikinews and Wikibooks, died on Wednesday. Editing under the name Pi zero, he was at the time the top contributor to Wikinews by edit count, and came third on English Wikibooks. Dr Shutt was 56 years old.

File photo of Dr John Nathan Shutt in 2020. (Image: acagastya.)

Dr Shutt's elder sister, Ms Barbara Shutt, informed Wikinews of his death via email early on Friday. His mother, Elsie Shutt, had called 9-1-1 emergency services after he had trouble breathing. By the time the ambulance came, Dr Shutt was unconscious. Ms Barbara Shutt added that the doctors operated on him for two hours, but in the end Dr Shutt died either of blood clots or of a series of heart attacks.

Dr Shutt was the most active editor and administrator on this project and had been contributing as Pi zero since September 2008. He was promoted to administrator in July 2010 and became a reviewer in August 2010. Since then, he has peer-reviewed then published over a thousand news articles on-wiki, the most recent being just a day before his death. He made over 160 thousand edits and over 120 thousand log entries on English Wikinews.

He also held reviewer and administrator privileges on English Wikibooks, having contributed to several wikibooks including Conlang, World Religions, Solar System and The Elements; and created Stacks, a mechanism for sorting the project's content.

Dr Shutt occasionally wrote posts on his blog, "Structural insight". He was interested in constructed languages (conlangs). He was an avid reader, and enjoyed J. R. R. Tolkien's The Lord of the Rings novels.

In a discussion about Tolkien's works last year, Dr Shutt said, "I read The Hobbit when I was, I think, a teenager. I read it again a few months ago; not sure if I ever read it between those times. It's a wonderfully written story --- by a linguist and, in fact, a conlanger. I've got the Lord of the Rings (the books, I mean), which I've read at least a couple of times over the years. And the Silmarillion, which covers the earliest part of Tolkien's legendarium. Christopher Tolkien, his son who was close to his fantasy writing, is his literary executor and has spent the past half century of his life editing and publishing various of his father's papers. I actually got for christmas... a year ago, I think, The Fall of Gondolin, which Christopher says will be the last of his father's books that he publishes."

Dr Shutt was awarded a PhD in Computer Science in 2011 from the Worcester Polytechnic Institute (WPI), Massachusetts. His research interests included abstraction theory; the Kernel programming language, a Lisp-based language which he created and which was his dissertation topic; Recursive Adaptive Grammars, the core of his master's thesis; and Self-Modifying Finite Automata, which he developed with Roy Rubinstein. He had received his master's degree in 1993, five years after finishing his bachelor's degree, both from WPI. Dr Shutt was also interested in adaptive grammars and category theory. He often programmed in Lisp, enjoyed xkcd comics and used Emacs as his text editor of choice.

He spent one year at Brown University for his postgraduate studies. Recalling the experience, Dr Shutt said, "I spent one year at Brown, but it didn't work. And was a traumatic experience for me; it took me a couple of years to recover enough to make a second try at graduate school." Dr Shutt shared an office with Paul Howard in the 1988/89 academic year at Brown University. In July 2019, Dr Shutt said, "It saddens me that I forgot to wish Paul Howard a happy birthday this year, and he appears to have forgotten to wish me one either. First time we've failed to exchange birthday wishes, even if belatedly, since we were assigned to share an office in the 1988/89 academic year at Brown".

Presentation at BSDCan discussing fexpr. (Image: BSDCan.)

Andres Navarro and Oto Havle had created an implementation of Kernel programming language, called kernel, which was mentioned in a presentation at BSDCan by Michael MacInnis. Recalling that incident in November, Dr Shutt said, "Two or three years ago, this guy Michael MacInnis emailed me. He was getting ready to give a talk at BSDCan (an annual BSD conference in Canada) about a new UNIX shell he was ready to release, called Oh; and he wanted to know if it was okay if he mentioned my name in regard to fexprs, 'cause my dissertation had come out as he was putting the design together and Kernel-style fexprs fit wonderfully well with his concept so he used them. I assured him I was fine with having my name mentioned. Last night I was watching the video he provides of his talk, which iirc he felt went very well. I've been meaning to learn in more detail how the shell works; it was kind of fascinating to me how it very easily does away with most of Lisp's parentheses despite being fundamentally Lisp. (Cons cells and fexprs. Profoundly Lisp.)".

Dr Shutt's cat Pickles in the morning sun (the window door behind him faces due east). His fur color comes out looking different here than in most light. (Image: Juan.)

Dr Shutt lived with Asperger's syndrome. In a discussion with one of the Wikimedia volunteers, he said, "As often happens with aspies, I was a hyperlexic kid, some of which has lingered."

Dr Shutt lived in Massachusetts, US, and is survived by his mother Elsie Shutt, his sister and niece Barbara and Hannah Shutt, his cat Pickles and his brother David Shutt. Dr Shutt would have turned 57 next Friday.





from Hacker News https://ift.tt/3pYAv23

How do I make newer Unity games backwards-compatible with OS X 10.9 “Mavericks”?

The crash log indicates that the game is looking for a function called getattrlistbulk. Because this function doesn't exist in Mavericks, the game doesn't know what to do, and crashes. Ergo, in order for the game to run, we'll have to give it what it wants—a copy of the getattrlistbulk function.

(In other words, we need to write some code. Make sure you have Apple's Xcode Command Line Tools installed!)

getattrlistbulk is a part of the kernel, which means I definitely don't understand what it does. But, what if I don't have to—what if Unity games don't actually need getattrlistbulk for anything important, and/or have a fallback codepath for unexpected values? If that was the case, it might be that getattrlistbulk doesn't need to actually do anything for the game to run, it merely needs to exist.

Let's give this code a try:

#include <stdint.h>
#include <sys/attr.h>

int getattrlistbulk(int dirfd, struct attrlist * attrList, void * attrBuf, size_t attrBufSize, uint64_t options) {
    /* Stub: report that zero directory entries were read, no matter what. */
    return 0;
}

This copy of getattrlistbulk will always return 0, no matter what.

If you save this code as UnityFixer.m, you can compile it with:

clang -compatibility_version 9999 -o UnityFixer.dylib -dynamiclib UnityFixer.m

Now, we need to make the game load this library, which should be easy to do with the DYLD_INSERT_LIBRARIES environment variable. Run in the Terminal:

DYLD_INSERT_LIBRARIES=UnityFixer.dylib Sayonara\ Wild\ Hearts.app/Contents/MacOS/Sayonara\ Wild\ Hearts

...and watch the game crash the exact same way it did before:

Dyld Error Message:
  Symbol not found: _getattrlistbulk
  Referenced from: /Users/USER/Desktop/Sayonara Wild Hearts.app/Contents/MacOS/../Frameworks/UnityPlayer.dylib
  Expected in: /usr/lib/libSystem.B.dylib

Why can't UnityPlayer.dylib find our fancy new getattrlistbulk function?

Let's look at the crash report again. UnityPlayer isn't expecting getattrlistbulk to be just anywhere, it's expecting it to be in /usr/lib/libSystem.B.dylib, and so isn't looking inside of our UnityFixer library. This is due to a concept called two-level namespaces, which you can and should read more about here. And, although two-level namespacing can be turned off via DYLD_FORCE_FLAT_NAMESPACE=1, Unity games won't work without it.

Let's try something else. What if we made the game load our library, UnityFixer.dylib, in place of libSystem.B.dylib? Apple's install_name_tool command makes this easy! Copy UnityFixer.dylib into the application bundle's Contents/Frameworks directory, and then run:

install_name_tool -change /usr/lib/libSystem.B.dylib @executable_path/../Frameworks/UnityFixer.dylib Sayonara\ Wild\ Hearts.app/Contents/Frameworks/UnityPlayer.dylib

(Replace Sayonara\ Wild\ Hearts with the name of your game.)

Now, try launching the game again, and...

Application Specific Information:
dyld: launch, loading dependent libraries

Dyld Error Message:
  Symbol not found: dyld_stub_binder
  Referenced from: /Users/USER/Desktop/Sayonara Wild Hearts.app/Contents/MacOS/Sayonara Wild Hearts
  Expected in: /Users/USER/Desktop/Sayonara Wild Hearts.app/Contents/MacOS/../Frameworks/UnityFixer.dylib
 in /Users/USER/Desktop/Sayonara Wild Hearts.app/Contents/MacOS/Sayonara Wild Hearts

Hey, at least the crash log changed this time! You can probably already guess why this didn't work. libSystem.B.dylib is essentially the Mac equivalent of Linux's libc, and as such, it contains many, many functions. UnityFixer only contains getattrlistbulk. And so although the game now has access to getattrlistbulk, it's missing everything else!

What we want is for our UnityFixer library to also provide all of the other libSystem functions in addition to its own—which is to say, we should make libSystem.B.dylib a sub-library of UnityFixer.dylib.

I did this using optool:

optool install -c reexport -p /usr/lib/libSystem.B.dylib -t Sayonara\ Wild\ Hearts.app/Contents/Frameworks/UnityFixer.dylib

(There's probably a cleaner way to link this at compile-time, but I couldn't figure out how, and optool worked.)

Now, try launching the game again, and...

It works!


Theoretical FAQs

(Follow-up questions no one has asked, but which they theoretically could.)

Q: Is the return value of 0 important?

A: Yes. I originally tried 1, but that caused some games to allocate all available memory and crash once none was left.

Q: Will this make all Unity games work properly in Mavericks?

A: No. As far as I'm aware, it will allow any Unity 2018/2019 game to start up, but no one said anything about working properly! Games are complex, and these ones have clearly never been tested on Mavericks before, so they may have other glitches. Timelie is one example of a game which technically works with this fix, but has very severe graphical problems.

Many other games, however, really do seem to run perfectly.

Q: Is it possible to actually reimplement getattrlistbulk instead of using a stub function?

A: Maybe? The Internet™ says getattrlistbulk is a replacement for getdirentriesattr. So, at one point I tried:

#include <stdint.h>
#include <sys/attr.h>
#include <unistd.h>

int getattrlistbulk(int dirfd, struct attrlist * attrList, void * attrBuf, size_t attrBufSize, uint64_t options) {

    unsigned int count;
    unsigned int basep;
    unsigned int newState;

    /* Forward the call to the older getdirentriesattr(), passing the attribute
       list and buffer straight through. (The two calls report their results
       differently, so this is only an approximation.) */
    return getdirentriesattr(dirfd, attrList, attrBuf, attrBufSize, &count, &basep, &newState, (unsigned int) options);
}

This was written by pattern-matching the example code for getattrlistbulk and getdirentriesattr in the developer documentation. It does work (insofar as games run), but then, so does return 0, so I really have no way to test the code. Under the circumstances, return 0 seems safer.

Q: Can I get a tl;dr?

A: Sure:

  1. Copy the first code block in this answer into a file named UnityFixer.m.
  2. clang -compatibility_version 9999 -o /Path/To/Game.app/Contents/Frameworks/UnityFixer.dylib -dynamiclib /Path/To/UnityFixer.m
  3. install_name_tool -change /usr/lib/libSystem.B.dylib @executable_path/../Frameworks/UnityFixer.dylib /Path/To/Game.app/Contents/Frameworks/UnityPlayer.dylib
  4. Download optool.
  5. /Path/To/optool install -c reexport -p /usr/lib/libSystem.B.dylib -t /Path/To/Game.app/Contents/Frameworks/UnityFixer.dylib


from Hacker News https://ift.tt/2OaJS1z

OpenGL Superbible

Comprehensive Tutorial and Reference

by Graham Sellers, Richard S. Wright and Nicholas Haemel

The sixth edition of OpenGL® SuperBible, the newest member of the Addison Wesley OpenGL Technical Library, is now available!

OpenGL® SuperBible, Sixth Edition, is the definitive programmer's guide, tutorial, and reference for the world's leading 3D API for real-time computer graphics, OpenGL 4.3. The best all-around introduction to OpenGL for developers at all levels of experience, it clearly explains both the newest API and indispensable related concepts. You'll find up-to-date, hands-on guidance for all facets of modern OpenGL development on both desktop and mobile platforms, including transformations, texture mapping, shaders, buffers, geometry management, and much more.

Extensively revised, this edition presents many new OpenGL 4.3 features, including compute shaders, texture views, indirect draws, and enhanced API debugging. It has been reorganized to focus more tightly on the API, to cover the entire pipeline earlier, and to help you thoroughly understand the interactions between OpenGL and graphics hardware.

Coverage includes

  • A practical introduction to the essentials of realtime 3D graphics
  • Core OpenGL 4.3 techniques for rendering, transformations, and texturing
  • Foundational math for creating interesting 3D graphics with OpenGL
  • Writing your own shaders, with examples to get you started
  • Cross-platform OpenGL, including essential platform-specific API initialization material for Linux, OS X, and Windows
  • Vertex processing, drawing commands, primitive processing, fragments, and framebuffers
  • Using compute shaders to harness today's graphics cards for more than graphics
  • Monitoring and controlling the OpenGL graphics pipeline
  • Advanced rendering: light simulation, artistic and non-photo-realistic rendering, and deferred shading
  • Modern OpenGL debugging and performance optimization

The book's website is at

http://www.openglsuperbible.com/

and includes sample code and more.



from Hacker News https://ift.tt/3q2ZsJP

Meltano: ELT for the DataOps era

# Instantly containerizable and production-ready

Now that you've got your pipelines running locally, it'll be time to repeat this trick in production!

Since your Meltano project is your single source of truth, deploying your pipelines in production is pretty straightforward, but you can greatly simplify this process (and prevent issues caused by inconsistencies between environments!) by wrapping them all up into a project-specific Docker container image: "a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings."

This image can then be used on any environment running Docker (or a compatible tool like Kubernetes) to directly run meltano commands in the context of your project, without needing to separately manage the installation of Meltano, your project's plugins, or any of their dependencies.



from Hacker News https://meltano.com/

I built a 5K iMac Display on my own



from Hacker News https://twitter.com/phillipcaudell/status/1352692104707919872

Reddit CEO: Platform doesn't plan to ban pornography

In an interview with "Axios on HBO," Reddit CEO Steve Huffman said the company supports pornography on its platform, as long as it's not exploitative.

Why it matters: Most other social media platforms — such as Facebook, Instagram, YouTube and Tumblr — have banned pornographic content.

"You can look at [porn] as exploitative. And, indeed, much of it is. And that's not the content that we want on Reddit," Huffman said. "But there's another aspect that's empowering. And these are people sharing stories of themselves, pictures of themselves. And we are perfectly supportive of that."

  • "There are difficult decisions to make in this sphere, but we think they're worth making, as opposed to saying, you know, 'No sex at all,' for example," he said.
  • Of note: Twitter still allows pornographic content as well.

Huffman also spoke to Reddit's involvement in the GameStop frenzy. He told "Axios on HBO" he was proud of r/WallStreetBets, the forum on Reddit that was largely responsible for making GameStop's stock go haywire.

  • "That community exposed a gap between those who have access to the financial markets and those who are on the outside," Huffman said.
  • "In WallStreetBets you see a community, among many things, that is breaking in or trying to break through into that establishment."


from Hacker News https://ift.tt/3kB67cX

UK meteor: 'huge flash' as fireball lights up skies

A large meteor blazed across UK skies on Sunday night, delighting those lucky enough to spot it.

The meteor was spotted shortly before 10pm and was visible for around seven seconds. It was captured on doorbell and security cameras in Manchester, Cardiff, Honiton, Bath, Midsomer Norton and Milton Keynes.

The UK meteor network, a group of amateur astronomers that has been using cameras to record meteor sightings across the UK since 2012, said the meteor was a fireball, and wrote on Twitter, “From the two videos we saw it was a slow moving meteor with clearly visible fragmentation.”

Meteors are space matter burning up as it enters the Earth's atmosphere. Fireballs are particularly bright meteors that in theory might be visible in daylight. According to the American Meteor Society (AMS), fireballs are meteors of at least magnitude -4, about as bright as the planet Venus appears in the evening or morning sky. A full moon is magnitude -12.6, while the sun is -26.7.

The AMS said that while “several thousand meteors of fireball magnitude occur in the Earth’s atmosphere each day”, most fall over the ocean or uninhabited areas.

The UK meteor network group said more than 120 people had reported seeing Sunday night’s meteor.

One Twitter user wrote of the fireball: “I first thought it was a bright star or plane, then it got bigger & faster, then a huge flash lit up the sky & it burst into a massive tail of orange sparks trailing behind like a giant firework!”

Also on Twitter, there were alien jokes, with references to Superman, The Day of the Triffids, Men in Black and War of the Worlds.

Others joked that it was revenge for Nasa’s Perseverance rover landing on Mars last week. The rover shared images and the first ever recording of what it sounds like on the red planet.

One user offered to prepare any arriving aliens a full english breakfast.



from Hacker News https://ift.tt/3b58CkR

The missing continent it took 375 years to find

It was 1642 and Abel Tasman was on a mission. The experienced Dutch sailor, who sported a flamboyant moustache, bushy goatee and penchant for rough justice – he later tried to hang some of his crew on a drunken whim – was confident of the existence of a vast continent in the southern hemisphere, and determined to find it.

At the time, this portion of the globe was still largely mysterious to Europeans, but they had an unshakeable belief that there must be a large land mass there – pre-emptively named Terra Australis – to balance out their own continent in the North. The fixation dated back to Ancient Roman times, but only now was it going to be tested.

And so, on 14 August, Tasman set sail from his company's base in Jakarta, Indonesia, with two small ships and headed west, then south, then east, eventually ending up at the South Island of New Zealand. His first encounter with the local Māori people did not go well: on day two, several paddled out on a canoe, and rammed a small boat that was passing messages between the Dutch ships. Four Europeans died. Later, the Europeans fired a cannon at 11 more canoes – it’s not known what happened to their targets. 

And that was the end of his mission – Tasman named the fateful location Moordenaers (Murderers) Bay, with little sense of irony, and sailed home several weeks later without even having set foot on this new land. While he believed that he had indeed discovered the great southern continent, evidently, it was hardly the commercial utopia he had envisaged. He did not return.

(By this time, Australia was already known about, but the Europeans thought it was not the legendary continent they were looking for. Later, it was named after Terra Australis when they changed their minds).

Little did Tasman know, he was right all along. There was a missing continent.



from Hacker News https://ift.tt/39VMRmI

Ask HN: Should I pass on a new job because they want me to sign a non-compete?

I took a few months off after the birth of my third child and am looking for a new job. I found an opportunity with a great team at an interesting company but there was a catch. After receiving an offer, I inquired if they required a non-compete and sure enough, there is a 1-year non-compete with the following conditions:

following the termination of my relationship with the Company for any reason, whether with cause or without cause, at the option either of the Company or myself, with or without notice

...

any business in competition with the Company's business as conducted by the Company during the course of my employment with the Company

I'm not a fan of non-competes generally but considering this was written to include any business that the Company believes is a competitor (no idea what kind of scope that entails) and asserts enforcement irrespective of who terminated the employment relationship, I told them I wasn't willing to sign it.

I have a friend who was pursued by a previous employer for violating a non-compete and even though he eventually won, it cost an immense amount of money, time (18 months!), and pain to fight.

I've also heard horror stories of being presented with a non-compete to sign after starting the new job and leaving previous employment. That kind of behavior seems especially devious, but it seems pretty common as well.

Am I making a mountain out of a molehill or should I stand my ground? Anyone else found themselves in a similar situation? Anyone been pursued by a previous employer due to a non-compete?



from Hacker News https://ift.tt/37W98iR

Making SoA Tollerable

Chandler Carruth (I think - I can't for the life of me find the reference) said something in a CppCon talk years ago that blew my mind. More or less, 95% of code performance comes from the memory layout and memory access patterns of data structures, and 5% comes from clever instruction selection and instruction stream optimization.

That is...terrible news! Instruction selection is now pretty much entirely automated. LLVM goes into my code and goes "ha ha ha foolish human with your integer divide by a constant, clearly you can multiply by this random bit sequence that was proven to be equivalent by a mathematician in the 80s" and my code gets faster. There's not much I have to worry about on this front.

The data structures story is so much worse. I say "I'd like to put these bytes here" and the compiler says "very good sir" in sort of a deferential English butler kind of way. I can sense that maybe there's some judgment and I've made bad life choices, but the compiler is just going to do what I told it. "Lobster Thermidor encrusted in Cool Ranch Doritos, very good sir" and Alfred walks off to leave me in a hell of L2 cache misses of my own design that turn my i-5 into a 486.

I view this as a fundamental design limitation of C++, one that might someday be fixed with generative meta-programming (that is, when we can program C++ to write our C++, we can program it to take our crappy OOPy-goopy data structures and reorganize them into something the cache likes) but that is the Glorious Future™. For now, the rest of this post is about what we can do about it with today's C++.

There Is Only Vector

To go faster, we have to keep the CPU busy, which means not waiting for memory. The first step is to use vector and stop using everything else - see the second half of Chandler's talk. Basically any data structure where the next thing we need isn't directly after the thing we just used is bad because the memory might not be in cache.

We experienced this first hand in X-Plane during the port to Vulkan. Once we moved from OpenGL to Vulkan, our CPU time in driver code went way down - 10x less driver time - and all of the remaining CPU time was in our own code. The clear culprit was the culling code, which walks a hierarchical bounding volume tree to decide what to draw.

I felt very clever when I wrote that bounding volume tree in 2005. It has great O(N) properties and lets us discard a lot of data very efficiently. So much winning!

But also, it's a tree. The nodes are almost never consecutive, and a VTune profile is just a sea of cache misses each time we jump nodes. It's slow because it runs at the speed of main memory.

We replaced it with a structure that would probably cause you to fail CS 102, algorithms and data structures:

1. A bunch of data is kept in an array for a sub-section of the scenery region.

2. The sub-sections are in an array.

And that's it. It's a tree of fixed design of depth two and a virtually infinite node count.

And it screams. It's absurdly faster than the tree it replaces, because pretty much every time we have to iterate to our next thing, it's right there, in cache. The CPU is good at understanding arrays and is going to get the next cache line while we work. Glorious!

There are problems so big that you still need O(N) analysis, non-linear run-times, etc. If you're like me and have been doing this for a long time, the mental adjustment is how big N has to be to make that switch. If N is 100, that's not a big number anymore - put it in an array and blast through it.

We Have To Go Deeper

So far all we've done is replaced every STL container with vector. This is something that's easy to do for new code, so I would say it should be a style decision - default to vector and don't pick up sets/maps/lists/whatever unless you have a really, really, really good reason.

But it turns out vector's not that great either. It lines up our objects in a row, but it works on whole objects. If we have an object with a lot of data, some of which we touch all of the time and some of which we use once on leap years, we waste cache space on the rarely used data. Putting whole objects into an array makes our caches smaller, by filling them up with stuff we aren't going to use because it happens to be nearby.

Game developers are very familiar with what to do about it - perhaps less so in the C++ community: vector gives us an array of structures - each object is consecutive and then we get to the next object; what we really want is a structure of arrays - each member of the object is consecutive and then we hit the next object.

Imagine we have a shape object with a location, a color, a type, and a label. In the structure of arrays world, we store 4 shapes by storing: [(location1, location2, location3, location4), (color 1, color 2, color3, color4), (type 1, type2, type3, type 4), (label 1, label2, label3, label4)].

First, let's note how much better this is for the cache. When we go looking to see if a shape is on screen, all locations are packed together; every time we skip a shape, the next shape's location is next in memory. We have wasted no cache or memory bandwidth on things we won't draw. If label drawing is turned off, we can ignore that entire block of memory. So much winning!

Second, let's note how absolutely miserable this is to maintain in C++. Approximately 100% of our tools for dealing with objects and encapsulations go out the window because we have taken our carefully encapsulated objects, cut out their gooey interiors and spread them all over the place. If you showed this code to an OOP guru they'd tell you you've lost your marbles. (Of course, SoA isn't object-oriented design, it's data-oriented design. The objects have been minced on purpose!)

Can We Make This Manageable?

So the problem I have been thinking about for a while now is: how do we minimize the maintenance pain of structures of arrays when we have to use them? X-Plane's user interface isn't so performance critical that I need to take my polymorphic hierarchy of UI widgets and cut it to bits, but the rendering engine has a bunch of places where moving to SoA is the optimization to improve performance.

The least bad C++ I have come up with so far looks something like this:

struct scenery_thingie {
    int            count;
    float *        cull_x;
    float *        cull_y;
    float *        cull_z;
    float *        cull_radius;
    gfx_mesh *     mesh_handle;

    void alloc(UTL_block_alloc * alloc, int count);
    scenery_thingie& operator++();
    scenery_thingie& operator+=(int offset);
};

You can almost squint at this and say "this is an object with five fields", and you can almost squint at this and say "this is an array" - it's both! The trick is that each member field is a base pointer to that field of the first object (of count objects), with the following objects' fields stored consecutively after it. While the cull_y array doesn't have to follow cull_x in memory, it's nice if it does - we'd rather not have them on different VM pages, for example.

Our SoA struct can be both an array (in that it owns the memory and has the base pointers) and an iterator: the increment operator advances each of the base pointers. In fact, we can easily build a sub-array by advancing the base pointers and cutting the count, and iteration is just slicing off smaller sub-arrays in place - it's very cheap.

This turns out to be pretty manageable! We end up writing *iter.cull_x instead of iter->cull_x, but we more or less get to work with our data as expected.
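
To make the mechanics concrete, here is a sketch (not X-Plane's actual code) of how the declared operator++ might be implemented and how a culling loop might consume the structure; draw_mesh is a hypothetical stand-in for whatever the renderer does with a visible object.

// One plausible implementation of the iterator-style increment (a sketch, not X-Plane's code):
scenery_thingie& scenery_thingie::operator++()
{
    --count;                            // the remaining sub-array is one element shorter
    ++cull_x; ++cull_y; ++cull_z;       // advance every base pointer to the next object
    ++cull_radius;
    ++mesh_handle;
    return *this;
}

// Typical usage: walk a copy of the array, touching only the fields the loop needs.
void cull_and_draw(scenery_thingie iter, float eye_x, float eye_y, float eye_z)
{
    for (; iter.count > 0; ++iter)
    {
        float dx = *iter.cull_x - eye_x;
        float dy = *iter.cull_y - eye_y;
        float dz = *iter.cull_z - eye_z;
        float r  = *iter.cull_radius;
        if (dx * dx + dy * dy + dz * dz < r * r)
            draw_mesh(iter.mesh_handle);    // hypothetical: draw the current object's mesh
    }
}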

Where Did the Memory Come From?

We have one problem left: where did the memory come from to allocate our SoA? We need a helper - something that will "organize" our dynamic memory request and set up our base pointers to the right locations. This code is doing what operator new[] would have done.

class UTL_block_alloc {
public:
    UTL_block_alloc();

    template<typename T>
    inline void alloc(T ** dest_ptr, size_t num_elements);

    void *    detach();
};

Our allocation block helper takes a bunch of requests for arrays of T's (arbitrary types) and allocates one big block that holds them consecutively, filling in each dest_ptr to point to its array. When we call detach, the single giant malloc() block is returned to be freed by client code.

We can feed any number of SoA arrays via a single alloc block, letting us pack an entire structure of arrays of structures into one consecutive memory region. With this tool, "alloc" of an SoA is pretty easy to write.

void scenery_thingie::alloc(UTL_block_alloc * a, int in_count)
{
    count = in_count;
    a->alloc(&cull_x, in_count);
    a->alloc(&cull_y, in_count);
    a->alloc(&cull_z, in_count);
    a->alloc(&cull_radius, in_count);
    a->alloc(&mesh_handle, in_count);
}

A few things to note here:

  • The allocation helper is taking the sting out of memory layout by doing it dynamically at run-time. This is probably fine - the cost of the pointer math is trivial compared to actually going and getting memory from the OS.
  • When we iterate, we are using memory to find our data members. While there exists some math to find a given member at a given index, we are storing one pointer per member in the iterator instead of one pointer total.

One of these structs could be turned into something that looks more like a value type by owning its own memory, etc., but in our applications I have found that several SoAs tend to get grouped together into a bigger 'system', and letting the system own a single block is best. Since we have already opened the Pandora's box of manually managing our memory, we might as well group things completely and cut down on allocator calls while getting better locality.
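
For what it's worth, here is one way such a helper could be implemented; this is a guess at the mechanics rather than X-Plane's actual implementation. It records each request, performs a single malloc when detach is called, and only then patches the recorded destination pointers.

#include <cstdlib>
#include <vector>

class UTL_block_alloc {
public:
    UTL_block_alloc() = default;

    // Record a request for num_elements objects of type T; *dest_ptr becomes
    // valid once detach() has been called.
    template<typename T>
    inline void alloc(T ** dest_ptr, size_t num_elements)
    {
        m_requests.push_back({ (void **) dest_ptr, num_elements * sizeof(T), alignof(T) });
    }

    // Satisfy every recorded request from one malloc'd block and hand the block
    // back to the caller, who frees it exactly once.
    void * detach()
    {
        size_t total = 0;
        for (const request& r : m_requests)
            total = align_up(total, r.align) + r.bytes;

        char * block = (char *) std::malloc(total);
        size_t offset = 0;
        for (const request& r : m_requests)
        {
            offset = align_up(offset, r.align);
            *r.dest = block + offset;
            offset += r.bytes;
        }
        m_requests.clear();
        return block;
    }

private:
    struct request { void ** dest; size_t bytes; size_t align; };
    static size_t align_up(size_t v, size_t a) { return (v + a - 1) / a * a; }
    std::vector<request> m_requests;
};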

Someday We'll Have This

Someday we'll have meta-programming, and when we do, it would be amazing to make a "soa_vector" that, given a POD data type, generates something like this:

struct scenery_thingie {
    int            count;
    int            stride;
    char *         base_ptr;
    float&         cull_x() { return *(float *) base_ptr; }
    float&         cull_y() { return *(float *) (base_ptr + 4 * stride); }
    float&         cull_z() { return *(float *) (base_ptr + 8 * stride); }
    /* */
};


I haven't pursued this in our code because of the annoyance of having to write and maintain the offset-fetch macros by hand, as well as the obfuscation of what the intended data layout really is. I am sure this is possible now with TMP, but the cure would be worse than the disease. But generative meta-programming I think does promise this level of optimized implementation from relatively readable source code.

Nitty Gritty - When To Interleave

One last note - in my example, I split the X, Y and Z coordinates of my culling volume into their own arrays. Is this a good idea? If it were a vec3 struct (with x, y, z members), what should we have done?

The answer is ... it depends? In our real code, X, Y and Z are separate for SIMD friendliness - a nice side effect of separating the coordinates is that we can load four objects into four lanes of a SIMD register and then perform the math for four objects at once. This is the biggest SIMD win we'll get - it is extremely cache efficient, we waste no time massaging the data into SIMD format, and we get 100% lane utilization. If you have a chance to go SIMD, separate the fields.
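
As a rough sketch of that idea (using SSE intrinsics and the scenery_thingie fields from above; the eye-point test is only illustrative, and the caller must ensure at least four objects remain from index i):

#include <xmmintrin.h>

// Test four bounding spheres against the eye point at once.  Because x, y, z and
// radius each live in their own array, four consecutive objects load straight
// into one SSE register with no shuffling.
void cull4(const scenery_thingie& s, int i, float eye_x, float eye_y, float eye_z, int out_visible[4])
{
    __m128 dx = _mm_sub_ps(_mm_loadu_ps(s.cull_x + i), _mm_set1_ps(eye_x));
    __m128 dy = _mm_sub_ps(_mm_loadu_ps(s.cull_y + i), _mm_set1_ps(eye_y));
    __m128 dz = _mm_sub_ps(_mm_loadu_ps(s.cull_z + i), _mm_set1_ps(eye_z));
    __m128 r  = _mm_loadu_ps(s.cull_radius + i);

    __m128 d2  = _mm_add_ps(_mm_add_ps(_mm_mul_ps(dx, dx), _mm_mul_ps(dy, dy)),
                            _mm_mul_ps(dz, dz));
    __m128 hit = _mm_cmplt_ps(d2, _mm_mul_ps(r, r));    // d^2 < r^2, per lane

    int mask = _mm_movemask_ps(hit);                     // one bit per object
    for (int lane = 0; lane < 4; ++lane)
        out_visible[lane] = (mask >> lane) & 1;
}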

But this isn't necessarily best. If we had to make a calculation based on XYZ together, and we always use them together and we're not going to SIMD them, it might make sense to pack them together (e.g. so our data went XYZXYZXYZXYZ, etc.). This would mean fetching a position would require only one stride in memory and not three. It's not bad to have things together in cache if we want them together in cache.




from Hacker News https://ift.tt/37RDdAj

Show HN: QueryCal – calculate metrics from your calendars using SQL

Comments

from Hacker News https://querycal.com

We should pay politicians more

On Friday, much of “the discourse” was taken up by this piece in The Times, which focuses on the travails of Boris Johnson’s Downing Street. Online discussion mostly centred around this section:

On the personal front, they say, Mr Johnson, 56, is worried and complaining about money. He is still supporting, to different degrees, four out of his six children, has been through an expensive divorce and had his income drop by more than half as a result of fulfilling his lifetime ambition.

As a backbench MP, with his Daily Telegraph column netting him £275,000 and lucrative speaking engagements, he was earning well in excess of £350,000 a year. His prime ministerial salary of about £150,000 might seem perfectly sufficient — but that is not what he actually receives. His use of the flat that he shares with his fiancée, Carrie Symonds, above Number 11 is taxed as a benefit in kind. Any food sent up from the Downing Street kitchen has to be paid for and if they want to have friends to stay at Chequers — Covid restrictions permitting — they receive a bill from the government.

Stop rolling your eyes and set aside your feelings about this particular Prime Minister for a minute. That the political leader of this country is paid just ~£150,000- an £81,932 salary for being an MP and the rest for being leader of the country- is fucked beyond belief.

To put the PM’s pay into context, 667 people at local authorities across Britain earn more than him.

This isn’t an argument against their pay levels (I’m sure they’re very good at their jobs), but that remuneration for the office of Prime Minister should at least attempt to reflect the prestige and importance of the job.

It’s not always been like this. The salary for the role of Prime Minister was £10,000 when it was first set in the Ministers of the Crown Act 1937- just under £690,000 in today’s money. Admittedly pay for the role has ebbed and flowed over time, but as late as 2010 pay was £193,885 (~£250,000 in today’s money). There’s clear historical precedent for paying the PM more.

Increasing the pay of the Prime Minister would also serve to materially decrease the number of daft media stories we see complaining that “X public servant is paid more than the PM!”. At best these stories are a distraction, at worst they actively hinder the quality of people the public sector can employ.

Of course, I'm not daft enough to argue that the role of Prime Minister is subject to the normal rules of the labour market. The job of PM could pay nothing and would still be the subject of intense interest and jockeying from sitting MPs, the pool of candidates from which we get our Prime Ministers. Instead the problem lies with the quality of that pool: we need to significantly increase MP pay to get better MPs.

That there is a quality problem with many of our MPs is beyond doubt. In 2012 the Royal Statistical Society tested the mathematical ability of all MPs by asking them the probability of getting two heads if a fair coin is flipped twice. Terrifyingly, 60% got this simple probability question wrong. We’ve had three elections since this experiment was conducted, and I’d put money on this result being worse if it was rerun in 2020. Our politicians are getting thicker.

I know plenty of bright people from across the political spectrum who would be fantastic MPs, but would never countenance becoming one. As documented in Isabel Hardman’s book “Why We Get the Wrong Politicians”, the job of an MP has got materially worse over the last twenty years: demand from constituents is higher, not just through the increase in worthy casework, but also the endless deluge of smart-arsey “policy” complaints; it increasingly absorbs every hour of your life; and it’s relatively low status compared to equivalent professions.

Add to that the obvious job insecurity; that it’s possible to be elected and, through no fault of your own, have no chance of a sniff of power (imagine being a Labour MP first elected in 2010, or a tory elected in 1997); and the fact that some of the perks of the job, such as the “resettlement grant”, the pension and the permissive expenses system, have disappeared in recent years, then it becomes clear why many talented people shy away from spending the most profitable years of their life in Parliament. The solution is to make the job more attractive to our most talented people by paying more.

That politics doesn't pay isn't just a problem at a national level. The Mayor of Tees Valley, a role which involves heading up a Combined Authority, representing almost 700,000 people and being responsible for a multi-million pound investment fund, is paid an embarrassing £35,700.

Similarly, most of our 20,000 local councillors are paid embarrassingly small “allowances”. There are plenty of decent, diligent and smart councillors, but those of you who have had any involvement in politics will know there are many who, well, aren’t…

This can only be fixed by making the role attractive to talented newcomers, avoiding the incongruous position where low quality elected representatives are paid a pittance to set the policies that are followed by council officers on 6 figure salaries. It’s no surprise that in the last LGA census 43% of councillors were aged over 65- they’re the only group that can afford to do it.

I'm aware of the irony of using the first issue of a newsletter called "normielisation" to argue for something as drastically unpopular as increasing politicians' pay, but frankly you get what you pay for. Being a politician will always have costs for the individual over other careers, but, if we want to attract the best people, the financial incentives shouldn't be as poor as they are now.

Just look at our stupid, turgid response to the Coronavirus crisis- wouldn’t it be nice if we paid for better politicians?

What I’ve been reading recently

Underground, Overground: A Passenger’s History of the Tube- Andrew Martin: Despite living in zone 2 of London, I’ve not been on the tube since March. Perhaps bizarrely, I miss it. In a sick pique of subterranean nostalgia, I pulled this off my bookshelf to read. It’s a good humoured and fairly comprehensive history of the world’s oldest underground rail network. I was particularly struck by how hodgepodge and unplanned the evolution of the system we now know and tolerate has been. Recommended if only for this discussion on whether a gentleman can get electrocuted by weeing on a live rail.

Science Fictions- Stuart Ritchie: An incredibly useful, “down to earth and surprisingly funny” primer on what has gone wrong with science in recent years, and how it can be fixed. I particularly recommend the section on “how to read a scientific paper”, which I expect to return back to many times over the years.

Full Disclosure: I’m a pal of Stuart’s and read an early draft of some of this to help him cut the number of Coldplay references and ensure a “literate moron” could understand it. Obviously this means I take full credit for any funny jokes within.

Britain’s Prisons aren’t working- Sam Ashworth-Hayes: this fantastic Spectator piece, which solves a philosophical tension I’ve struggled with for some time: how to reconcile my reactionary view that our current approach to rehabilitation fundamentally doesn’t work, with my soft wet lib-ish view that Prison is not a very nice place to be. Seems the answer is longer sentences for violent criminals, but spending more to make prison a more dignified place!



from Hacker News https://ift.tt/34c8KdJ

Screen scraping and TinyML can turn any dial into an API

https://github.com/jomjol/AI-on-the-edge-device

The image in the repository linked above shows a traditional water meter that’s been converted into a web API, using a cheap ESP32 camera and machine learning to understand the dials and numbers. I expect there are going to be billions of devices like this deployed over the next decade, not only for water meters but for any older device that has a dial, counter, or display. I’ve already heard from multiple teams who have legacy hardware that they need to monitor, in environments as varied as oil refineries, crop fields, office buildings, cars, and homes. Some of the devices are decades old, so until now the only option to enable remote monitoring and data gathering was to replace the system entirely with a more modern version. This is often too expensive, time-consuming, or disruptive to contemplate. Pointing a small, battery-powered camera at the dial instead offers a lot of advantages. Since there’s an air gap between the camera and the dial it’s monitoring, it’s guaranteed not to affect the rest of the system, and it’s easy to deploy as an experiment and iterate on.

If you’ve ever worked with legacy software systems, this may all seem a bit familiar. Screen scraping is a common technique to use when you have a system you can’t easily change that you need to extract information from, when there’s no real API available. You take the user interface results for a query as text, HTML, or even an image, ignore the labels, buttons, and other elements you don’t care about, and try to extract the values you want. It’s always preferable to have a proper API, since the code to pull out just the information you need can be hard to write and is usually very brittle to minor changes in the interface, but it’s an incredibly common technique all the same.
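As a toy illustration of how this text-side scraping works, and how brittle it tends to be, here is a minimal C sketch; the HTML fragment and the "Reading:" label are entirely made up, and the extraction only survives as long as that anchor text and the surrounding markup stay exactly the same.

    /* Toy screen scraper: anchor on nearby label text and pull out one
     * numeric value. The markup and the "Reading:" label are made up for
     * illustration; real scrapers break as soon as such anchors change. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        const char *page = "<tr><td>Reading:</td><td>1234.5 kWh</td></tr>";
        const char *anchor = strstr(page, "Reading:");
        if (anchor != NULL) {
            /* skip past the label and any markup until the number starts */
            const char *p = anchor + strlen("Reading:");
            while (*p != '\0' && (*p < '0' || *p > '9'))
                p++;
            printf("scraped value: %g\n", strtod(p, NULL));
        }
        return 0;
    }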

The biggest reason we haven’t seen more adoption of this equivalent approach for IoT is that training and deploying machine learning models on embedded systems has been very hard. If you’ve done any deep learning tutorials at all, you’ll know that recognizing digits with MNIST is one of the easiest models to train. With the spread of frameworks like TensorFlow Lite Micro (which the example above apparently uses, though I can’t find the on-device code in that repo) and others, it’s starting to get easier to deploy on cheap, battery-powered devices, so I expect we’ll see more of these applications emerging. What I’d love to see is some middleware that understands common display types like dials, physical or LED digits, or status lights. Then someone with a device they want to monitor could build it out of those building blocks, rather than having to train an entirely new model from scratch.

I know I’d enjoy being able to use something like this myself. I’d use a cell-connected device to watch my cable modem’s status, so I’d know when my connection was going flaky; I’d keep track of my mileage and efficiency with something stuck on my car’s dashboard looking at the speedometer, odometer and gas gauge; it would be great to have my own way to monitor my electricity, gas, and water meters; and I’d have my washing machine text me when it was done. I don’t know how I’d set it up physically, but I’m always paranoid about leaving the stove on, so something that looked at the gas dials would put my mind at ease.

There’s a massive amount of information out in the real world that can’t be remotely monitored or analyzed over time, and a lot of it is displayed through dials and displays. Waiting for all of the systems involved to be replaced with connected versions could take decades, which is why I’m so excited about this incremental approach. Just like search engines have been able to take unstructured web pages designed for people to read, and index them so we can find and use them, this physical version of screen-scraping takes displays aimed at humans and converts them into information usable from anywhere. A lot of different trends are coming together to make this possible, from cheap, capable hardware and widespread IoT data networks to software improvements and the democratization of all these technologies. I’m excited to do my bit to hopefully help make this happen, and I can’t wait to see all the applications that you all come up with; do let me know your ideas!


from Hacker News https://ift.tt/3dPuJ0m

Procfs: Processes as Files (1984) [pdf]


from Hacker News https://ift.tt/356I1QK

JSON parser written in 6502 assembly language

I was watching TV, and there was a commercial which proclaimed, "It's time to do what you want!" I replied to the TV, "It's time to write a JSON parser in 6502 assembly language?" Somehow I don't think that's what they had in mind, but the TV is right, I should do what I want.

So, here is my JSON parser. The core parser is written entirely in 6502 assembly language, and is meant to be assembled with ca65. However, it is meant to be called from C, and uses the cc65 calling convention (specifically, the fastcall convention).

JSON65 should work on any processor in the 6502 family. (It does not use any 65C02 instructions.)

The assembly language parts of JSON65 use the zero page locations used by cc65, in a way which is compatible with the C calling convention.

JSON65 should work on any target supported by the cc65 toolchain. I have tested it on sim65 and on an unenhanced Apple //e.

Parser (json65.h)

JSON65 is an event-driven (SAX-style) parser, so the parser is given a callback function, which it calls for each event.

JSON65 supports incremental parsing, so you can freely feed it any sized chunks of input, and you don't need to have the whole file in memory at once.

JSON65 is fully reentrant, so you can incrementally parse several files at once if you so desire.

JSON65 does have a couple of limits: strings are limited to 255 bytes, and the nesting depth (of nested arrays or objects) is limited to 224. However, there is no limit on the length of a line, or the length of a file.

JSON65 uses 512 bytes of memory for each parser, which must be allocated by the caller. JSON65 does not use dynamic memory allocation.

In accordance with the JSON specification, JSON65 assumes its input is UTF-8 encoded. However JSON65 does not validate the UTF-8, so any encoding can be used, as long as all bytes with the high bit clear represent ASCII characters. Bytes with the high bit set are only allowed inside strings. The only place where JSON65 assumes UTF-8 is in the processing of \u escape sequences. In accordance with the JSON specification, a single \u escape can be used to specify code points in the Basic Multilingual Plane, and two consecutive \u escapes (a UTF-16 surrogate pair) can be used to specify a code point outside the Basic Multilingual Plane. These escapes will be translated into the proper UTF-8.
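To make the surrogate-pair arithmetic concrete, here is a small standalone worked example (my own illustration, not code from JSON65): the escape pair \uD83D\uDE00 combines into the code point U+1F600, whose UTF-8 encoding is the byte sequence F0 9F 98 80.

    /* Worked example of combining a \u surrogate pair into a code point
     * outside the Basic Multilingual Plane and encoding it as UTF-8.
     * This illustrates the arithmetic only; it is not JSON65 code. */
    #include <stdio.h>

    int main(void) {
        unsigned long hi = 0xD83D, lo = 0xDE00;   /* \uD83D\uDE00 */
        unsigned long cp = 0x10000UL + ((hi - 0xD800UL) << 10) + (lo - 0xDC00UL);
        unsigned char utf8[4];
        utf8[0] = (unsigned char)(0xF0 | (cp >> 18));
        utf8[1] = (unsigned char)(0x80 | ((cp >> 12) & 0x3F));
        utf8[2] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
        utf8[3] = (unsigned char)(0x80 | (cp & 0x3F));
        /* prints: U+1F600 -> F0 9F 98 80 */
        printf("U+%lX -> %02X %02X %02X %02X\n",
               cp, utf8[0], utf8[1], utf8[2], utf8[3]);
        return 0;
    }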

Because JSON only allows newlines in places where arbitrary whitespace is allowed, JSON65 is agnostic to the type of line ending. (CR, LF, or CRLF.) For the purposes of counting line numbers for error reporting, JSON65 handles CR, LF, or CRLF line endings.

JSON65 will parse numbers which fit into a 32-bit signed long, and will provide the long to the callback. All other numbers (i.e. floating-point numbers, or integers which overflow a 32-bit long) are provided to the callback as a string. (Like strings, numbers cannot be more than 255 digits long.)

The callback function may return an error if it wishes. This will cause parsing to stop immediately, and the error code returned by the callback will be returned by j65_parse(). Error codes are negative numbers, and the user may use the codes from J65_USER_ERROR to -1, inclusive, for their own error codes.
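The README doesn't reproduce the actual prototypes (those are documented in json65.h), so the following is only a schematic sketch of the calling pattern described above: caller-allocated 512-byte parser state, arbitrarily sized input chunks, and a per-event callback that can abort by returning a negative code. Every demo_* identifier is hypothetical and merely stands in for the real json65.h entry points.

    /* Schematic of the SAX-style flow described above. Nothing here is
     * real JSON65 API: demo_parse() and demo_callback stand in for the
     * prototypes declared in json65.h. */
    #include <stdio.h>
    #include <string.h>

    #define DEMO_PARSER_SIZE 512   /* README: 512 bytes per parser, caller-allocated */

    typedef int (*demo_callback)(int event, const char *text);

    static int on_event(int event, const char *text) {
        printf("event %d: %s\n", event, text);
        return 0;                  /* a negative return would abort parsing */
    }

    /* stand-in for the real parse entry point; fires one fake event per chunk */
    static int demo_parse(char *state, demo_callback cb,
                          const char *chunk, size_t len) {
        (void)state; (void)len;
        return cb(0, chunk);
    }

    int main(void) {
        static char state[DEMO_PARSER_SIZE];        /* no dynamic allocation */
        const char *chunks[] = { "{\"temp\":", "21}" };  /* incremental input */
        size_t i;
        for (i = 0; i < sizeof(chunks) / sizeof(chunks[0]); i++) {
            if (demo_parse(state, on_event, chunks[i], strlen(chunks[i])) < 0)
                break;             /* callback requested an early stop */
        }
        return 0;
    }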

Tree interface (json65-tree.h)

If you use the event-driven parser, you'll need to build your own data structure (or otherwise handle the data somehow) as the events come in. If you don't want to do that, you can use the tree interface (json65-tree.h) instead, which builds up a data structure for you. This only works for small files, because the entire tree has to fit in memory at once.

Unlike the event-based parser, the tree interface uses dynamic memory allocation.

Printing JSON (json65-print.h)

Mostly, JSON65 is a parser. However, it does have some support for printing JSON back to a file, in json65-print.h. The function j65_print_tree() will print a JSON tree (from the tree interface in json65-tree.h) to a given filehandle. It prints the entire JSON tree on a single line with no whitespace. This is the most compact format for a machine-readable JSON file, but it is not particularly human-readable.

If you write your own code to print JSON, either because you want to pretty-print it, or because you are using a data structure other than j65_node, you may still want to use the function j65_print_escaped() from json65-quote.h. It handles escaping a string using the JSON escape sequences.
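For reference, this is the kind of escaping meant; the function below is my own sketch of the rules from the JSON specification, not the implementation in json65-quote.s.

    /* Sketch of JSON string escaping: quotes, backslashes and control
     * characters must be escaped before a string is emitted. This
     * illustrates the JSON rules, not the code in json65-quote.s. */
    #include <stdio.h>

    static void print_escaped(const char *s) {
        putchar('"');
        for (; *s != '\0'; s++) {
            switch (*s) {
            case '"':  fputs("\\\"", stdout); break;
            case '\\': fputs("\\\\", stdout); break;
            case '\n': fputs("\\n", stdout);  break;
            case '\t': fputs("\\t", stdout);  break;
            default:
                if ((unsigned char)*s < 0x20)
                    printf("\\u%04X", (unsigned)(unsigned char)*s);  /* other control chars */
                else
                    putchar(*s);
            }
        }
        putchar('"');
    }

    int main(void) {
        /* prints the input wrapped in quotes with \n, \" and \t escapes */
        print_escaped("line one\nsays \"hi\"\t(tab)");
        putchar('\n');
        return 0;
    }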

API documentation

I don't have any fancy Doxygen documentation, but the API is documented by comments in the header files. If you wish to use the event-driven parser, read json65.h. If you wish to use the tree interface, read json65-tree.h.

Library organization

If you simply wish to use the event-driven (SAX-style) parser, you only need one header file (json65.h) and one assembly file (json65.s). However, there are some helper functions in other files, which you can optionally use with JSON65 if you like. Most notable is the tree interface to JSON65, which you may use instead of the event-driven interface for small files.

Each header file corresponds directly to one implementation file. Some of the implementation files are written in assembly language, and some are written in C. Here is a description of each, along with the size of the machine code of the implementation (CODE section plus RODATA section; none of the implementation files have any DATA or BSS).

  • json65.h (2240 bytes) - The core, event-driven parser. This is the only file that is required if you wish to build your own data structure.
  • json65-string.h (291 bytes) - This implements a string intern pool which is used by the tree interface.
  • json65-tree.h (1300 bytes) - The tree interface, which builds up a tree data structure as the file is parsed. You may then traverse the tree to your heart's content.
  • json65-quote.h (226 bytes) - This has a function which prints strings, replacing special characters with the escape sequences from the JSON specification. It is used by the tree printer, but can also be used standalone if you are printing JSON files yourself without using the tree interface.
  • json65-print.h (710 bytes) - Prints a tree to a file as JSON. Use this if you are using the tree interface, and wish to write JSON files as well as read them.
  • json65-file.h (1378 bytes) - Provides a helper function to feed data to the parser from a file, in chunks, and to display error messages to the user (including printing the offending line, and printing a caret to indicate the offending position of the line).

I hate build systems (or at least, build systems for C code), so I have not provided one. (Other than a lame little Perl script to build and run the tests using sim65.) Instead, I encourage you to copy the source files and header files you need into your own project, and use whatever build system you are already using for your project. (Such as the GNU Make based cc65 build system.)

You can use the following dependency graph to determine which source files you will need to copy into your project. (For each source file, you will also need to copy the corresponding header file.) Source files with no dependencies (such as json65.s) are at the top of the graph, while the source file with the most dependencies (json65-print.c) is at the bottom of the graph.

                json65.s    json65-string.s
                  /  \         /
                 /    \       /
                /      \     /
     json65-file.c    json65-tree.c     json65-quote.s
                           \             /
                            \           /
                             \         /
                            json65-print.c

If you wish to build and run the tests, simply run the run-test.pl Perl script at the top level of the repository. (It takes no arguments.) You'll need to have the cc65 toolchain installed.

Note: version 2.17 and earlier of sim65 have a bug in the implementation of the BIT instruction, so the tests will fail. You'll need a more recent version to get the tests to pass. (This only affects the simulation of the tests. If you plan on running JSON65 on real hardware, or on an emulator other than sim65, then you'll be fine with an older version of cc65.)

License

JSON65 is licensed under the zlib/libpng license, which is approved by the OSI and the FSF.



from Hacker News https://ift.tt/37W6YQB

Reviewing the Book Review


from Hacker News https://ift.tt/3bEQFIV

Squeak: A Free Smalltalk System – On RISC OS

Squeak: A Free Smalltalk system




What is Squeak?
Squeak is a free Smalltalk system originally released by a team including Alan Kay, Dan Ingalls, Ted Kaehler, John Maloney and Scott Wallace in 1996 when they were working at Apple. You might recognise the first three names from early Smalltalk papers from Xerox PARC. They produced a rather nice Smalltalk system with the unusual virtue that both the image and the Virtual Machine are open source - i.e. free, gratis and "no charge to you sir".
Finding Out More About Squeak
To find most of the web resources for Squeak, look at the Squeak.Org site. There are lots of pointers to information about Smalltalk, instructions for downloading Squeak, tutorials, FAQs etc. I won't waste space by duplicating any of it here. I do most strongly recommend that you read many of them.
Squeak runs on...
Macs, iPhones, most UNIX & Linux systems, Windows of various versions, RISC OS and some obscure specialised systems. See the above mentioned master page for details on how to get the files.
I’ve spent many years making Smalltalk available for RISC OS and other ARM based systems including the original Acorn Archimedes & RPC desktops, the Active Book, an early prototype version of the Compaq ‘iPaq’ handheld, the Interval Research ‘MediaPad’, an HP prototype pad-thing and other stuff still secret.
New News of a newsish nature
2013 - Squeak is back on RISC OS! Those nice people at the Raspberry Pi Foundation sent me a Pi; it has RISC OS on it and I’ve been getting things working on it.


It runs quite nicely in general; the Pi’s RISC OS graphics kernel seems a bit slow right now, but there is work being done that should improve that significantly. It supports Scratch as well and runs it decently - though there is a lot of work being done to improve that, too. Somewhat perversely, MIT decided to rewrite Scratch in Flash (belch) ‘for better browser support’ and seems to have abandoned the ‘old’ Squeak-based system. Since Flash doesn’t run on RISC OS nor indeed on ARM systems in general, we’ll be supporting ‘old’ Scratch for a while.
You can download a copy of Squeak for RISC OS from the central squeakvm.org site.
Currently I’m working for the Pi Foundation to improve Scratch under Raspbian (their Linux version) by rewriting some of the more egregiously ugly code, improving algorithms, tweaking VM configurations and so on. As of early 2014 it’s significantly faster than the original version, with a fair bit more to come. A major project has been porting the code forward to the latest Squeak image so that it can run on the most modern VMs; right now it is using the ‘StackVM’. I hope to get the newer-design dynamic translating VM working soon.
When and if possible all of this will get moved over to RISC OS but making a living comes first!

Building the VM with VMMaker
I also developed and for many years maintained the VMMaker package, the lump of Squeak code that defines and generates the bulk of the VM. See the VMMaker page on the Squeak Swiki for more info. You can fetch the VMMaker package from SqueakMap, or use the SqueakMap tool in the image and look for (guess what) VMMaker. You will also need a Subversion client so that you can fetch the handwritten parts of the VM source code from the repository.
Once you have mastered the complexities of the VMMaker and successfully built yourself a custom VM you should download this certificate to attest to your mighty geekiness.
Stuff wot I wrot
• I contributed a chapter describing the structure, function, design and implementation of virtual machines and the lowest level of Smalltalk code to "Squeak: Open Personal Computing and Multimedia", edited by Mark Guzdial and Kim Rose and published by Prentice-Hall. An online version of that chapter is here.
• I worked on a realtime OS in Squeak whilst employed at Interval Research Corp
• A short paper on making BitBlt work for little-endian machines without having an intermediate display-on-screen conversion
Squeak logo artwork
At the dawn of Squeak-time we needed a logo. Every project needs a logo. I designed one, it caught on and can be found all over the web, on T-shirts, sweatshirts, books, badges, underwear, hats and probably secret spy satellites in geosynchronous orbit. (No, seriously; there is now at least one satellite running Squeak code!)
Here are some files of the Squeak logo that you may like to use:-

  • 32x32
  • 48x48
  • 64x64
  • as a pdf and therefore scalable

Feel free to download them and use them for links etc. If you'd like any other size, I can easily generate them for you from vector artwork. If you want to use it for a project of some sort relating to Squeak you are most welcome to do so - if you are making a neat badge or shirt or publishing a book I’d love a copy if at all practical.



from Hacker News https://ift.tt/3sHcoa5