13 Apr 2017

Crypto-currency Hype Has Gotten Far Ahead of Reality

I was listening to a podcast where a Bitcoin enthusiast was bragging about transactions completing in 10 minutes. Well, IMPS and RTGS transactions complete in seconds, and have been doing so for years, so Bitcoin is a step back. Even 10 minutes is an exaggeration: average transaction times have at times exceeded 2 hours.

Slowness isn't the only problem. Sometimes, transactions have failed as the network became overloaded. Who wants an unreliable payment system?

Then, Bitcoin can be user-unfriendly, as a crypto-currency expert says.

Crypto-currencies are also insecure. The two major contenders — Bitcoin and Ethereum — have both been hacked. Bitcoin alone has been hacked multiple times. Mt. Gox, a Bitcoin exchange, lost a massive $460 million to hackers. It turned out that the hackers had been skimming money for years without Mt. Gox's knowledge. Then, in 2016, another Bitcoin company, this time Bitfinex, was hacked for $65 million.

"There is a long tradition of blaming the victim in the bitcoin community," says a CS professor who researches crypto-currencies.

It's not just Bitcoin that has been hacked. Another crypto-currency, Ethereum, has had $50 million stolen.

If my bank were hacked, I'd have recourse. The government would get involved and do something about it, such as forcing the bank to pay me out of its own pocket. There's also deposit insurance [1]. Bitcoin, by virtue of operating outside the traditional financial system, has no such safeguards. I wouldn't put my money in a system that has no safeguards and has been repeatedly hacked. All the more so when I may be the one who ends up having to bail out other customers.

Leaving security aside, Bitcoin's value has been extremely volatile, losing as much as 78% of its value at one point.

Why would you save your money or maintain a balance in such a volatile currency, any more than in Zimbabwean dollars?

A real currency like the rupee, which isn't the most stable currency in the world, has lost only 19% of its value over the same period, peak to trough.

Crypto-currencies are also fragmented. If I have Bitcoin, and you have Ethereum, we can't seamlessly transact. Crypto-currency enthusiasts should make gateways to make this seamless, just as I can buy an app in dollars using my rupee-denominated credit card.

Worse, even a single crypto-currency like Ethereum has fragmented, because the developers couldn't agree on whether to reverse the aforementioned hack. We now have Ethereum (ETH) and Ethereum Classic (ETC), two different currencies with different values. Someone who accepts one isn't guaranteed to accept the other.

Talking of fragmentation, there are more than 700 cryptocurrencies! Do the people behind the 700th one really think their currency has something unique to offer that the first 699 don't? Even if you were to look only at currencies with a market cap of $10 million or more, there are 52 of them!

This is a perennial problem with open-source, where too many people don't understand the value of unity, that their efforts will yield more fruit if put behind one of the market leaders in the space, rather than doing their own thing. Doing your own thing is fine if it's a radically different vision, say a multi-currency system like Ripple as compared to a single-currency system like Bitcoin. But most of the time, the new thing is different only in minor ways, which the founder may be passionate about, but nobody else cares about. The value of putting all your wood behind a few arrows exceeds that of a minor difference splitting the community. We don't need the 700th crypto-currency.

Fragmentation is particularly a problem for crypto-currencies, whose value derives entirely from network effects, unlike, say, a text editor. I could theoretically use a text editor of which I'm the only user in the world, if it works better for me than the others. But I can't use a crypto-currency that no one else accepts. Fragmentation in crypto-currencies is even worse than fragmentation in text editors [2].

Moving on from fragmentation, some crypto-currencies like Bitcoin consume thousands of times more energy per transaction than a credit-card swipe. That's fine as long as they're niche, but they can't scale, and not just because of energy consumption: blockchain size and increasing transaction fees are problems too. Bitcoin can handle only 7 transactions per second, because of a limit in the protocol, while Visa handles 2000 per second.

Then there are transaction charges, which are sometimes more than credit-card transactions.

Finally, Bitcoin has a toxic community and politics, with DDoS attacks against people who disagree on technical issues, and even death threats.

A major contributor to the Bitcoin community says it has failed. Maybe that's true, maybe not, but when a prominent member says that, it should give you pause before rushing to jump on the newest bandwagon.

The Gartner Hype Cycle

... says that when a new technology comes out, expectations get out of hand, leading to disillusionment when we compare reality with the rosy promises.

But eventually, we realise that the new technology has some use, even if it wasn't as much as was promised. We then reach a plateau of productivity, where we benefit from the technology, even if in a more modest way than was claimed.

Crypto-currencies now have inflated expectations. The reality is nowhere as good as promised. I do think they have significant long-term potential, but as with any new technology, we should be careful not to confuse long-term potential for today's reality. I won't be opening a crypto-currency wallet this year, and neither should most readers of this blog post.

[1] Even if for a small amount, like ₹1 lac, in India.

[2] Though the point still holds that, of the dozens of text editors that exist, the developers of the least popular ones would be better off contributing their efforts to one of the market leaders.

29 Mar 2017

Most Indian Startups Will Fail

A fascinating article by Haresh Chawla examines everything that's wrong with the Indian startup sector, and why 97% of them will fail:

  • Indians are cost-driven, not convenience-driven. We have all the time in the world to research and find the best price. We spend little and that too carefully. We hate paying for service.
  • Only 10% of Indians can pay the kind of money startups assume, but startups don't take that into account. They don't design a business model that works for the 90%.
  • Startups attract the wrong customers, the 90% chasing discounts and freebies, who leave as soon as the discounts end.
  • Startups transplant business models from the US, and VCs happily fund them, thinking they're less dangerous. Startups should, instead of cloning their role models, understand why they've done what they've done.
  • The unit economics don't work out: each transaction generates a loss to the company.
  • Startups often don't understand their competitors, like the boy in the restaurant. He's probably idle at the time of delivering an order, so it's almost zero cost for him to deliver one, unlike the startup.
  • Startups scale up, requiring more overhead to handle the scale, causing costs to spiral.
  • Sometimes, the management team doesn't even know what their unit economics are.
  • Profitability is far off, like at least 7 years for grocery startups.
  • Indian startups have too many employees. Ola has more people to run its India operations than Uber has to run operations in 50 countries! Many of these employees have high salaries and work in posh offices.
  • India's internet economy will take 7 or so years to reach where China is today.
  • When a startup fails, the entrepreneurs say they learnt a lot, but even that is often untrue. Running a company for years with negative unit economics doesn't count.
  • The winners from this whole game are often Google and Facebook, who make money from the ads the startups run.

28 Mar 2017

How to Clean up an Online Community

Many communities like Reddit and Youtube have toxic comments. These dis-incentivise others from participating, upset well-intentioned participants, and earn the site a bad reputation.

How do you clean up such a site? I think the way is to identify the most toxic individuals, ban them, and delete all their comments and replies [1]. I think it's only a few mis-behaving people who provoke others to respond in kind, or convey that this kind of behavior is normal or otherwise acceptable in the forum. If you find and delete the worst offenders, the quality of the discussion will improve hugely.

You can't find all the offenders, anyway: it would take too much time for a free service, and you'd risk catching some innocent people, or people in the grey area. That would make other users rebel against the moderators or the site, or conclude that the site arbitrarily deletes valid comments, and not invest more time participating. To avoid this problem, take action only against users who have unambiguously crossed the line.

Finding the worst offenders shouldn't be hard. Go by the number of times their comments were flagged, or the number of people who've blocked them, or the number of Youtube channels from which they've been blocked, or find people with tons of downvotes without a corresponding number of upvotes, like 40:1. Or just do a search for people using abusive language like "fuck you" or "asshole", sorted by the number of times each person used such language.
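As a rough sketch of such a query (the thresholds here, 20 flags and a 40:1 downvote ratio, are made-up examples, not recommendations):

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

struct UserStats {
    std::string name;
    int flags;       // times their comments were flagged
    int downvotes;
    int upvotes;
};

// Return users who unambiguously crossed the line, worst first.
// The thresholds (20 flags, or a 40:1 downvote ratio) are illustrative only.
std::vector<std::string> worstOffenders(std::vector<UserStats> users) {
    std::sort(users.begin(), users.end(),
              [](const UserStats& a, const UserStats& b) { return a.flags > b.flags; });
    std::vector<std::string> result;
    for (const auto& u : users) {
        bool heavilyFlagged = u.flags >= 20;
        bool ratioed = (u.upvotes == 0) ? u.downvotes >= 40
                                        : u.downvotes >= 40 * u.upvotes;
        if (heavilyFlagged || ratioed)
            result.push_back(u.name);  // queue for human review
    }
    return result;
}
```

The output is a review queue sorted worst-first, not a ban list; a human still makes the final call before anyone is blocked.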

To be clear, an offensive tone is not the only unacceptable behavior: doxxing people, physically threatening them, and advocating violence against individuals or groups all qualify as well.

Human review is needed before blocking anyone, to prevent mistakes. And human review is feasible because the goal is not to catch every mis-behaving user, just the worst ones.

When you delete someone's account for abuse, replace all their comments with a note saying "This person's account was deleted because they were offensive". If their comments remain on the site, they may provoke more responses in the same tone, or otherwise convey that this kind of behavior is accepted on the site, leading to a downward spiral. That's not a risk worth taking to preserve some acceptable comments made by a known bad actor. Better to delete all their comments.

Note the wording: not "this comment", but "this person's account". That tells everyone else that certain behavior isn't tolerated and will lead to the person losing their account, not just an individual comment being deleted.

I think this strategy will significantly and quickly clean up a site with toxic comments, which in turn attracts more well-intentioned and well-behaving users, leading to a positive spiral, like Hacker News.

[1] I'm not sure about replies to replies, but direct replies to an incendiary comment are often not constructive to people not involved in the argument, so can be deleted.

Redesigning Our Education System for Lifelong Learning

Our education system dates from hundreds of years ago, and hasn't been updated for today's world. So what aspects of it need change?

Instead of offering a degree consisting of different courses, offer courses individually. Some students will spend years going deep. Others will take a few courses, then leave college and take up a job, perhaps returning later to study more, either alongside their job or taking time off. Let students choose which path to take.

The concept of a four-year degree is stupid. No omniscient bureaucrat knows how much training is right for every single student. Besides, in today's world, you need to keep learning throughout your career anyway. Given that reality, why cram in everything at one point? It may not be needed in practice. For example, I learnt compilers in college and have never built a compiler (or parser or formal grammar) in nine years of working. Even if I did need to build one today, I've long forgotten most of my compiler course. You'll get more value for the time spent if you learn things when needed.

What you learn should depend on your interests. Forcing everyone to go through the same courses leaves no room for individual interests and passions, reducing eager students to cogs in the bureaucratic machine.

Letting students decide what courses they want to take will also result in them choosing courses that are useful to them in the real world. Like how to build web or mobile apps. Or how to design and code securely, which is applicable across domains. Or the practical aspects of how software teams work, like version control, or code review, or using a bug tracker. Or how to do product and UX design for an app.

Instead, students are forced to learn stuff with niche applications in the real world, like formal specification and verification of programs, or functional programming in Haskell. Education should first teach people what they need for a job, the basics. Then go to the advanced, niche or academic stuff. Education works the other way around, which is topsy-turvy. Imagine you were teaching someone to drive a car, and instead of teaching them how to park or signal, you teach them how to evade someone chasing them, or how to drive at high speed in reverse. That would be a crazy driving school, but that's exactly how education is today.

Even when colleges do teach a relevant topic, they're outdated. For example, I was taught CVS in college, when Subversion had already taken off. Besides, we committed to version control individually, which defeats a big point of a version control system. We should have been asked to work together in, say, 6-person teams for a quarter, so we understand how not to step on each other's toes. Which goes beyond version control, by the way. How do you communicate? Co-ordinate? Track progress and bugs? And so on.

Letting students decide what courses to take fixes many of these problems. They'll learn practical, useful stuff. Professors will then have to compete for students, which is a useful real-world check to ensure they don't go off and do things nobody cares about. For example, I had a prof who liked type systems, and would say things like if you have

struct {
    int a;
    char b;
};
and the set of integers is represented by A, and the set of characters by B, then the set of values of this struct is represented by AxB. Which is all fine, but how does this help me write better programs, I asked him? He had no answer. He preferred things that are formal to those that are useful, as Paul Graham put it.

When professors have to compete to get students to attend their course, it will act as a safety check against this kind of intellectual masturbation. They could also be paid per student attending their course, rather than a fixed monthly salary.

Next, get rid of the difference between a degree, a diploma or a certificate. A three-week course should be treated the same from a regulatory perspective as a three-year course. Why have all these bureaucratic classifications? As long as you're learning useful things, both in the sense of getting a good job in the short term and in the sense of equipping yourself for your career in the long term, who cares what it's called?

This applies to schools as well. Instead of awarding someone a certificate saying he's passed 10th class, a student should be able to take only 10th class maths, pass, and earn a certificate saying he's passed 10th class maths. He can then take 11th class maths without completing the other subjects in 10th class.

Or even skip 10th class maths and then take 11th class maths. Maybe the student has learnt informally, such as online, or from books, rather than through the bureaucratic education system. If he passes the 11th class maths exam, he's proven himself, so award him a certificate saying he's passed 11th class maths.

Get rid of the rigid differences between a school, junior college, undergrad and post-grad.

Get rid of attendance as well, which is a dumb concept: it selects people who turn up to class but pay no attention, while penalising people who learnt more by themselves. Attendance also assumes that the teacher has turned up to class in the first place, or that the teacher knows something, which is unfortunately not always the case. Even at a better than average college in India, I once had a lecturer consult with me on something because I knew more. The whole concept of me attending his class was a farce, when it could have been him attending my class. Getting rid of attendance as a requirement fixes these problems.

There should be no age limits either, except for risky courses like learning to fly a plane.

Anyone should be able to take the exam at the end of the course by just walking into an exam hall. Get rid of the concept of enrollment, that there's a list of students taking a course. You should be able to just walk into an exam hall, like you walk into a cinema. If you want to make sure there's a seat for you, reserve one a month in advance, like buying a train ticket instead of just turning up at the railway station.

Let anyone conduct exams, which could be different from the entity that's conducting the course. And let the exam-conductors compete: if companies offer jobs based on an exam, or a top college accepts a PhD student based on the exam, that must be because the exam is doing a good job of evaluating students. Then the exam will acquire a reputation amongst companies, students and parents. Let the exam work to prove itself, rather than being a meaningless piece of paper issued by a bureaucracy.

An entity conducting an exam can also strike a deal with a company so that people who pass the exam automatically get job offers from the company. Or the top rankers automatically get a job offer. In an ideal world, companies shouldn't have to conduct their own tests and interviews. It's redundant to have two evaluations of the same person. If a company is conducting their own tests, that's a sign that the one conducted by the college isn't testing the right aspects. Let's have more integration.

Exams can be written, multiple-choice, taken on a computer in the exam centre or at home, a lab, a panel discussion, pair programming with an evaluator, an interview, or any other form. It's all up to the party conducting the exam. Why should some bureaucrat assume they know better than the people actually doing it how best to test students?

People should be able to legally work once they're 13, in jobs that are safe, like a clerk, not in a construction site or steel plant. If there's a concern that kids will be strained, limit working hours to 5 hours a day, 5 days a week. Maybe kids should be allowed to work only with parents' approval, as a further safeguard.

For low-skill jobs, like working as a clerk, a decade of education (4-13) should be more than enough. Society needs a lot of low-skill workers. We can't wish that fact away. Why waste these people's time and life making them learn things they don't need? If the education system can't teach students what they need to learn in a decade, which is a long time, then it's the system that is broken.

If someone does continue education after 13, they shouldn't have any compulsory courses imposed, either by the college or the government. The decade before 13 is plenty of time to teach them what they need. Beyond that, they should be free to learn what they want.

Another problem with educational institutions today is that students don't know whether they're getting a good deal for the money they pay. Many teachers don't show up to class, or have no clue, simply reading aloud from the textbook, which students can do by themselves just as well. Even if they're competent, are they benefitting students more than the fees they charge? Given the astronomical cost of education, in India, the US and other countries, parents sacrifice a lot for their children's education. This makes it critical that they get their money's worth. Then there's the concern that people who can't pay are excluded, so the poor's children remain poor, which is unacceptable.

We can solve all these problems by first requiring education institutions to post data about students' incomes before and after graduation. What's the median income after completing the course? What's the median increase in income after completing the course, compared to their income before? How many students earned more? And so on.

Every educational institution must guarantee a certain salary after graduation. People who don't make this much money [1] automatically have their fees waived [2]. If someone does make more than the threshold, they pay a fixed percentage of their income, say 10%, as fees every year [3].

The threshold keeps getting increased for inflation every year. A decade after the course finishes, you're no longer required to pay any fees [4].

Fees are not due until the course completes, and even then are only a percentage of one's income, so educational institutions will no longer have an incentive to turn away poor people. There will be only one fee, not a separate lab fee, exam fee, entrance fee or toilet fee, which is a rip-off.

There will no longer be any need for accreditation, which is bureaucratic, and dysfunctional, anyway. Institutes that don't equip students with useful skills, deliberately or not, will suffer the risk of not getting paid, not the students.

Let educational institutions compete on the guaranteed income and the fee percentage. One institute can guarantee a salary of ₹5 lac for a fee of 10%, and another can guarantee a salary of ₹6 lac for a fee of 15%, presumably because they're offering better training or are doing a better job of selecting intelligent students. A student who receives both these offers can decide, along with her parents, whether to pay more for a better job. And it's a risk-free decision: if she doesn't end up making ₹6 lac, she doesn't pay for the training. There's no downside in taking the better offer. And since the fee is paid only out of the student's future earnings, taking the best offer won't strain the family finances.

A third institute can guarantee a salary of ₹5 lac for a fee of 15%. This would appeal only to students who didn't get a better offer, say from the first two institutes.

Let educational institutions compete [5] by being better at their job. And let them have a profit motive. There's nothing wrong in earning a profit while serving students and society. Earning a profit doesn't somehow make an institution less effective. If anything, it's likely to make it more effective, as long as the incentives are aligned.

Liberalise the system so that foreign educational institutions can set up shop in India, including repatriating profits back to their home countries. That's a necessary part of any business.

[1] Yes, it requires tracking everyone's income. Perhaps students will be able to opt out of this tracking and instead pay whatever the institute asks, ahead of time.

[2] Maybe it can be based on the median income of the class: if it's less than X, nobody pays. If it's more than X, everybody pays. This reduces the misaligned incentive of students wanting to take low-paying jobs to avoid paying fees, or understating their income.

Another solution to this incentive problem is to ensure that an increased salary always corresponds to an increased take-home pay. If the threshold is 5 lac, and someone makes exactly 5 lac, and the fee is 10%, they'll be left with 4.5 lac. This incentivises them to understate their income, to 4.9 lac, say. To fix this, if the threshold is 5 lac, the fee shouldn't reduce their take-home pay to below 5 lac.

So, if an institute guarantees a salary of 5 lac, and the student makes at least 5 lac, the fees will be: min(10% of income, income - 5 lac).

This is the same reason for income tax slabs being defined so that if you fall in the 30% slab, it's 30% of your income above 10 lac, not 30% of your total income, which would make people unhappy when they get a raise from 9.5 lac to 10 lac.
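To make the formula in [2] concrete, here's a sketch (the 5 lac threshold and 10% rate are the ones from the example above; amounts are in lac):

```cpp
#include <algorithm>
#include <cassert>

// Fee for a graduate earning `income`, given a guaranteed salary of
// `threshold` and a fee rate of `rate` (e.g. 0.10 for 10%).
// Below the threshold the fee is waived; above it, the fee never
// pushes take-home pay below the threshold.
double fee(double income, double threshold, double rate) {
    if (income < threshold) return 0;                    // fee waived
    return std::min(rate * income, income - threshold);  // min(10% of income, income - 5 lac)
}
```

At exactly 5 lac the fee is zero; at 5.2 lac it is 0.2 lac, not 0.52 lac; and only above roughly 5.56 lac does the full 10% apply. So a raise always increases take-home pay, just as with income-tax slabs.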

[3] Not a fixed fee, to align the educational institute's incentive with maximising students' earnings, thus maximising societal good.

[4] One's success more than a decade after finishing a course is more likely to be due to one's own effort than the course.

[5] Competition is effective only when students can easily compare two offers to see how much they're paying. So, all institutions should have the same pricing formula: if your income is above X, you pay Y% of your income for a decade after the course finishes. Institutions can vary X and Y, but not other factors, such as asking for a fixed fee of 5 lac, say, or changing the payment period to 5 years or 15 years. This makes it easy to compare offers and pick the best one.

26 Mar 2017

Programming Languages Should be Built Up Using Zero-Overhead Abstractions

How do you design a programming language?

You can do it in an unprincipled way, by taking a popular language and adding features to it without an underlying logic to exactly what features you're adding, and why, and why not other features. And why the features you've added took the form they did. This can never produce languages as good as a principled approach can, where everything that's there is there for a justifiable reason, and it all fits together. Great design feels inevitable.

So, what's a principled approach to language design? One is to build the language up using zero-overhead abstractions. As Stroustrup defines them:
“What you don’t use, you don’t pay for.” And further: “What you do use, you couldn’t hand code any better.”

An example is Java: it has both lists, which are resizable, and arrays, which aren't [1]. Using a list requires an additional memory allocation (for the list object itself), garbage-collection, and dereference on each access. Further, lists can't directly store primitive types like ints — you have to box each int into an Integer object.

So, lists impose extra costs over using arrays. But if you don't need the benefits of a list, you don't need to pay the cost — you can use a plain array. That's an example of the zero-overhead principle.
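The same trade-off exists in C++, which makes it easy to sketch: std::vector allocates its elements on the heap and can grow, while std::array lives wherever you declare it, with no allocation at all. You pay for resizability only when you use it.

```cpp
#include <array>
#include <cassert>
#include <vector>

int sumFixed() {
    std::array<int, 4> a = {1, 2, 3, 4};  // stored inline: no heap allocation
    int s = 0;
    for (int x : a) s += x;
    return s;
}

int sumGrowable() {
    std::vector<int> v = {1, 2, 3, 4};    // one heap allocation, but resizable
    v.push_back(5);
    int s = 0;
    for (int x : v) s += x;
    return s;
}
```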

The zero-overhead approach to language design is to carefully identify these zero-overhead abstractions, and build the language up using them [2].

Why is this important? In one word: performance.


For a while, people like Paul Graham thought that the ever-increasing power of computers would mean that we'd be able to use less and less efficient abstractions as time goes by. But then came the cloud, and you had to pay for server costs. I've heard it asked at Google, paraphrasing: "Do you want to use C++ and run 100 servers, use Java and run 200, or use Python and run 2000? Your time is not as costly as you may think." [3]

After the cloud came mobile devices, which have weak CPUs and GPUs, and limited memory. Storage is limited, so your binary size matters. Even the power the device does have can't be used continuously, because that would deplete the battery. And the network is costly, with few users having unlimited plans, so you should use it judiciously.

Over the years, smartphones became more powerful, and some of these constraints eased, but then came smartwatches, which are far more resource- and battery-constrained.

As one category of device becomes more powerful, newer, less powerful devices are entering the scene. Like AirPods, Apple's wireless earphones, which support interactions like tapping and Siri. Imagine if, next year, they could run third-party apps. They will be even more resource-constrained than smartwatches. The AirPods run for only 4-5 hours, compared to phones or smartwatches, which run the better part of the day.

With IoT, you may have a smart key that's expected to last for a year.

The point is that as technology improves and one class of device becomes more powerful, we'll end up building newer, even more constrained devices. We'll never have a situation where performance or battery optimisation doesn't matter: if your language is inefficient, you're ceding territory to other languages.

Then there's processor speed. Single-threaded performance has been increasing only slowly for almost a decade. We don't have 30 GHz CPUs. If you were using Python in 2003 assuming that in a few years it would be as fast as C++ (was in 2003), that didn't happen. While single-threaded performance has increased slowly, multi-core performance has continued increasing quickly, so if you want to take advantage of the hardware, you have to multithread your app, which is a significant overhead for you to design, code and debug.

Now, with Moore's law slowing down, even multithreaded performance is going to increase slower than it has in the past. Which means we need languages that make more efficient use of the resources we do have.

Another factor is that as hundreds of millions of people in emerging markets like India come online, they're buying cheap phones. They'll use only apps that run well on those phones, which is again an argument for efficiency.

For all these reasons, performance is important in programming language design, and will remain so for the foreseeable future.

A List of Zero-overhead Abstractions

So, if performance is important, what are some zero-overhead abstractions that give us power without hurting performance?

One is type annotations, also known as static typing. These let the compiler generate [4] more efficient code [5].

Second, value types. These don't require dynamic allocation, deallocation [6] and indirection on every access. Languages like Ruby where even integers are heap objects fail this criterion, compared to Java, say.

Value types are more than things like numbers. They include structs. In a language like Swift, you can bundle multiple variables into a struct without the overhead of putting it on the heap [7]. Unlike Java, where objects have to live on the heap [8]. Similarly, an object that's a field of another object requires two allocations in Java, while a struct that's a member of a struct in Swift or C doesn't require even one heap allocation.
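Since Swift structs behave like C ones here, a C++ sketch can demonstrate the point: the inner struct is laid out inline inside the outer one, so a local variable needs no heap allocation at all, and the inner fields sit within the outer object's footprint.

```cpp
#include <cassert>
#include <cstdint>

struct Inner { int x; int y; };
struct Outer { int id; Inner inner; };  // Inner is stored inline, not behind a pointer

bool innerIsInline() {
    Outer o{1, {2, 3}};  // one stack object, zero heap allocations
    auto begin = reinterpret_cast<std::uintptr_t>(&o);
    auto end   = begin + sizeof(Outer);
    auto inner = reinterpret_cast<std::uintptr_t>(&o.inner);
    // The inner struct's storage lies entirely within the outer object.
    return inner >= begin && inner + sizeof(Inner) <= end;
}
```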

Arrays are another example. A Java array of 100 objects requires 101 allocations, whereas in C++, say, you need only one. Similarly, a multidimensional array in Java requires a separate allocation for each of the inner arrays. Constant-size arrays, like int[100], are another case: they can be allocated inline, without any dynamic memory allocation at all.
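A quick C++ sketch of the array point: a vector of structs is one contiguous allocation, so element i+1 sits directly after element i in memory, with no per-element heap objects the way a Java Object[] has.

```cpp
#include <cassert>
#include <vector>

struct Point { double x, y; };

bool elementsAreContiguous() {
    std::vector<Point> pts(100);  // one allocation holds all 100 Points
    // The structs are stored inline in the array, not behind per-element
    // pointers, so adjacent elements are exactly sizeof(Point) apart.
    return &pts[1] == &pts[0] + 1;
}
```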

Heap allocation is not the only overhead with Java objects. They have runtime type information, which allows things like instanceof to work. You should be able to omit it when you don't intend to do anything dynamic with an object, again like structs in Swift [9].

Objects in dynamically typed languages like Javascript have even more overhead, because you can dynamically add and remove fields from them. Which is again fine when you want it, but you should be able to opt out of it when you don't, for performance. If you can't opt out, the language isn't zero-overhead.

Templates, also known as generics, are another zero-overhead abstraction. If you want to write code that works with different data types, you have two choices: virtual methods, which dispatch at runtime, and templates, which do it at compile time, making them a zero-overhead abstraction.

Importantly, Java generics aren't a zero-overhead abstraction, because dispatch is still dynamic, with the overhead that entails. By contrast, templates in C++, Swift [10] or Rust are zero-overhead.
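A minimal C++ sketch of the two choices: the virtual call is dispatched through a vtable at runtime, while the template is specialised per concrete type at compile time, producing a direct call you couldn't hand-code any better.

```cpp
#include <cassert>

// Runtime dispatch: every call to area() goes through a vtable.
struct Shape {
    virtual double area() const = 0;
    virtual ~Shape() = default;
};

struct Square : Shape {
    double side;
    explicit Square(double s) : side(s) {}
    double area() const override { return side * side; }
};

// Compile-time dispatch: the compiler stamps out a specialised
// areaOf<Square>, a direct (and inlinable) call with no vtable lookup.
template <typename T>
double areaOf(const T& shape) {
    return shape.area();
}
```

Both compute the same thing; you pay the virtual call's cost only when you actually need runtime polymorphism.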

Raw pointers are an important zero-overhead abstraction. Swift, for example, uses ref-counting, but it also supports unsafe pointers for situations in which you don't want the overhead of ref-counting [11].

Likewise, Rust supports multiple kinds of references, like ref-counting, unique references and borrowed references, but there are situations when all of them impose runtime overhead [12]. So Rust supports raw pointers as well.

This makes Swift and Rust zero-overhead, like C++ [13] [14].

Related to raw pointers is skipping array bounds-checking, in those few critical sections where it matters, as in Rust [15].
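std::vector in C++ illustrates the same idea: at() bounds-checks every access and throws std::out_of_range on a bad index, while operator[] skips the check, so you can keep the checked form everywhere except the few measured hot loops. A sketch:

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// Checked by default: at() throws std::out_of_range on a bad index.
long sum_checked(const std::vector<int>& v) {
    long total = 0;
    for (std::size_t i = 0; i < v.size(); i++) total += v.at(i);
    return total;
}

// Unchecked, for the rare hot loop where the check shows up in profiles.
long sum_unchecked(const std::vector<int>& v) {
    long total = 0;
    for (std::size_t i = 0; i < v.size(); i++) total += v[i];
    return total;
}
```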

Functions are another important abstraction. A language like Ruby that dynamically dispatches all calls is obviously not zero-overhead. You need static dispatch [16]. Overloading is zero-overhead, as are default arguments and keyword arguments.

Varargs as implemented in Java, say, are also zero-overhead: you don't pay for them when you don't use them, and the compiler-generated code is no slower than what you'd write by hand.

Another example is properties, as in Swift, which are things that look like instance variables but are actually methods. Since properties are just syntactic sugar, they're zero-overhead abstractions.

Exception-handling can also be zero-cost, as it is in Objective-C [17].

Reflection can also be a zero-cost abstraction: while reflective calls may be slow, reflection shouldn't slow down normal calls.

The opposite of reflection is forwarding, a way for an object to forward calls to another object in a generic way, without having to implement each method separately. In Objective-C, for example, your class can define a method forwardInvocation. When someone calls a method on the object that isn't defined, before generating an error, the runtime invokes forwardInvocation, giving you a chance to deal with the call, usually by forwarding it to another object. Put differently, it treats the method name dynamically as just another argument. Forwarding is also a zero-overhead abstraction, since it doesn't slow down normal method calls.

A language can also support lazy functions, via a lazy keyword. These execute only when the return value is accessed. A lazy function that returns a Foo is effectively a normal function that returns a Callable<Foo>, which has a single get() method that returns a Foo [18]. Have the compiler implicitly call this get() method when needed. With this transformation, lazy evaluation is just syntactic sugar, and imposes no overhead when not used, making it a zero-overhead abstraction.
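Here's a rough C++ sketch of that desugaring, with a made-up Lazy<T> wrapper standing in for Callable<Foo>: the body runs on the first get() and the result is cached thereafter. A compiler implementing `lazy` would insert the get() calls for you.

```cpp
#include <functional>
#include <optional>

// Lazy<T> is a hypothetical wrapper: get() runs the computation the
// first time it's called and returns the cached value thereafter.
template <typename T>
class Lazy {
public:
    explicit Lazy(std::function<T()> compute) : compute_(std::move(compute)) {}

    const T& get() {
        if (!value_) value_ = compute_();  // the body executes only once
        return *value_;
    }

private:
    std::function<T()> compute_;
    std::optional<T> value_;
};
```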


Languages should be designed in a principled way, and not by gluing together features without having a strong reason why those features were chosen and not others. When you decide what abstractions your language should have, first choose zero-overhead abstractions, so that the language is powerful and efficient at once. Abstractions that aren't zero-overhead make you choose between programmer productivity and efficiency, but if you can have both at once, that's the best choice.

In addition to new languages, you can retrofit zero-overhead abstractions onto existing languages, say to increase performance. For example, Java could choose to support structs in the next version. This increases performance without hurting productivity. Alternatively, Java could choose to add support for lazy functions, which increases power without hurting performance.

As another example, a dynamically-typed language like Ruby could support objects that have a fixed set of fields at allocation time, rather than dynamically adding and removing them.

A lazy language like Haskell could support an eager keyword for eager evaluation, in those hot spots where lazy evaluation imposes significant overhead.

These are all ways to retrofit zero-overhead abstractions onto existing languages, which increase performance, power or both at once.

[1] In Java, arrays are part of the language, while lists are implemented in the standard library, but even if lists are included in the language in a future version, it wouldn't make a difference to the zero-overhead nature of Java.

Conversely, arrays could have been part of the library rather than the language, as in Python. Again, a Java-like language with this design choice would still be zero-overhead, because you can use arrays when you want them, for greater performance.

Only if a language didn’t offer fixed-size arrays at all would it fail to be a zero-overhead language.

[2] You can and should keep the APIs of arrays and lists as similar as possible. Java doesn't: array.length vs list.size() are named differently, and one is a field while the other is a method call. This isn't ideal.

[3] If you're thinking that most of us don't operate at the scale of Google, that's true, but we have less money, as well, to spend on hosting. A single App Engine instance with 1GB memory costs ₹10,000 a month. If I use an inefficient language that requires another instance, I have to pay ₹10,000 more a month. And that's just for one app. I plan to build multiple, so the costs add up.

Not to mention that if my app isn't successful enough, or successful in acquiring users but not in revenue, I have to pay the hosting bill out of my own pocket. Since I'm an independent developer living off my savings, as opposed to a company or a VC-backed startup, this may force me to shut the app down sooner. Whereas, if I'd built it in a performant language and didn't have a significant hosting bill, I'd be able to let it continue running for a while and iterate on it a few times to give it a chance to succeed.

[4] By definition: If someone comes up with a more efficient dispatch mechanism that doesn't use type information, statically typed code can use that too.

[5] Note that I said that the language must support type annotations, not that it should require them. An optionally-typed language like TypeScript is also zero-overhead, because you can always add a type declaration to eliminate the overhead of dynamic typing. But a language like Ruby that forbids type declarations doesn't meet the bar of being zero-overhead.

We can also have a language with an optimistic type checker, rather than the pessimistic ones we see in most statically typed languages. That is, we can have a language that lets us do:

Collection c = new ArrayList();
c.get(0);

The get() method is defined in List, not in Collection, but a language could permit this code, perhaps with a warning rather than an error. Then, the declaration that c is a Collection ensures that we can call Collection methods on it without risking a runtime error, and do so more efficiently than in a dynamically-typed language. In other words, the declaration is just an assertion that c instanceof Collection, not a straitjacket that prevents you from calling List methods on it.

Such a language would also be zero-overhead.

[6] Whether it's garbage-collection, reference-counting or explicit C-style dealloc() calls, memory management imposes an overhead.

[7] Even better is C++, which lets you allocate a primitive on the heap, if that’s what you need, without having to wrap it in an object, as in Java. Whether something is on the heap and whether it’s an aggregate consisting of multiple variables are logically unrelated questions.

[8] Escape analysis optimises some, but not all, of these cases. In other words, if a function f() allocates an ArrayList and passes it to g(), which passes it on to h(), which doesn’t pass a reference to it elsewhere, escape analysis should stack-allocate it. I don’t think Java, for example, can do that every time. If my supposition is correct, escape analysis isn’t a substitute for explicitly being able to allocate a struct on the stack.

Though one solution to this would be to add annotations to method parameters saying that it doesn’t escape from the function. That is:

int max(@noescape List<Integer> list);

declares that the list doesn’t escape out of the max() function, say to be stored in a long-lived object. This means that max() either doesn’t pass the list to any other function or, if it does, that function itself declares it as @noescape.

[9] Then there are enums, which are effectively structs that take less space. A type-safe language can have tagged unions, which is better than not having enums at all.

[10] Even Swift or Rust generics are limited. For example, in Swift:

func closeBoth<T>(a: T, b: T) {

... generates an error saying that T doesn't have a close() method. You need to say something like:

func closeBoth<T: Closeable>(a: T, b: T) {

... where Closeable is an interface that defines a close() method. But this excludes classes that don't implement Closeable, even if they have a close() method.

The above code works in C++:

template<typename T>
void closeBoth(T *a, T *b) {

This makes C++ templates more flexible, which means you can use them in more situations, without having to dispatch dynamically.

[11] Swift’s raw pointers are very inconvenient to use:

let p = UnsafeMutablePointer<Foo>.alloc(1)

compared to a normal reference:

let p = Foo()

But you can still use them, so Swift remains zero-overhead.

[12] You can’t easily use unique and borrowed references to implement a doubly-linked list, for example.

[13] C++ is more dangerous than Swift or Rust, because you end up using unsafe pointers pervasively, not just when performance demands it.

[14] Which is not to say that every language should have unsafe pointers. Unsafe pointers clearly don't belong in Javascript, since it runs in the browser. Other languages, like Java, could also choose to be secure. Which is a fine decision to make; just that they're not zero-overhead, then: they're ceding some use cases to other languages.

[15] Rust again does this better than C++, where accesses are unchecked pervasively, rather than only in the few critical sections where it matters. The C++ approach makes programs insecure and unreliable.

[16] Again, the default can be virtual, as in Java, or not, as in C++. Both are zero-overhead since you can dispatch statically when it matters.

Even if Java didn’t have final methods, it would remain zero-overhead since it has static methods, which are dispatched without runtime overhead.

[17] Though, if you’re using a language / coding style in which exceptions are regularly thrown and caught, I don’t know if zero-overhead exceptions slow down the program overall.

[18] The get() method can be called multiple times, but executes the underlying computation only once.

21 Mar 2017

Extensible Numeric Types

Programming languages deal with arithmetic overflow in different ways. Some, like Python, automatically create an unlimited-size integer when an int overflows. Others, like Swift, signal an error. Java wraps around. In C++, this is undefined, which means anything can happen, such as a random result, a crash, or even other variables becoming corrupted. These four languages handle overflow in four different ways!

Which of these behaviors should a language adopt?

The fallacy is in assuming that there should be only one. Swift, for example, normally signals an error on overflow, but offers an alternative &+ operator that wraps around. We can take this one step further, and offer several addition functions [1]:

- add(), which throws an exception on overflow.
- add_promote(), which returns an int one size bigger than its arguments, like int64 if the arguments are int32 [2].
- add_wrap_around()
- add_unsafe(), which is like C++: overflow results in undefined behavior, which is C++-speak for "anything can happen".

That way, you can use whichever type of addition is appropriate for the task at hand.
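The variants above can be sketched in C++ for 32-bit ints. The names mirror the hypothetical functions in the list; a real design would make them generic over all integer types.

```cpp
#include <cstdint>
#include <stdexcept>

// add(): throws on overflow, the safe default.
int32_t add(int32_t a, int32_t b) {
    if ((b > 0 && a > INT32_MAX - b) || (b < 0 && a < INT32_MIN - b))
        throw std::overflow_error("int32 overflow");
    return a + b;
}

// add_promote(): returns an int one size bigger, so it can't overflow.
int64_t add_promote(int32_t a, int32_t b) {
    return static_cast<int64_t>(a) + b;
}

// add_wrap_around(): Java-style wraparound, via unsigned arithmetic,
// which is defined to wrap.
int32_t add_wrap_around(int32_t a, int32_t b) {
    return static_cast<int32_t>(static_cast<uint32_t>(a) +
                                static_cast<uint32_t>(b));
}
```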

Once you have different kinds of addition, you can define + as an alias for add(), which is a safe default. When you need performance, you can always use another addition method. Even in performance-sensitive code, only a small fraction of the code accounts for most of the execution time, so there's no reason to sacrifice safety globally.

Some of these addition functions may be implemented using others, but that's an implementation detail.

You could also define a new type of addition. For example, if I'm processing images, I wouldn't want a pixel value to wrap around and have a bright pixel in the sky become dark. I would instead want the value to saturate at the maximum, like 255. I could implement that by defining an add_saturate() function. That's the benefit of getting out of the straitjacket of thinking there must be only one kind of addition operator.

T add_saturate(T a, T b) {
  try {
    return a + b;
  } catch (OverflowException e) {
    return T.maxValue;
  }
}

With generics, your definition works for all types for which addition is defined.

This is extensible not just to new types of addition, but to new number types as well. Suppose you're doing scientific calculations, and you want to track floating-point error bounds in addition to the result, like 9 ± 0.1. You could define a new data type for this, and implement arithmetic operators for it. For example, you could consider two ranges to be equal if they overlap. For less-than comparisons, you could choose not to implement <, instead defining two functions, definitely_less_than() and probably_less_than(). The former returns true if the max of the first range is less than the min of the second range, and the latter returns true if the mean of the first range is less than the mean of the second range.
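A minimal C++ sketch of such a type, with the comparisons defined exactly as above (Bounded and the function names are made up for illustration):

```cpp
// A value with a ± error bound, like 9 ± 0.1.
struct Bounded {
    double value, error;
    double min() const { return value - error; }
    double max() const { return value + error; }
    double mean() const { return value; }
};

// Two ranges are considered equal if they overlap.
bool overlaps(Bounded a, Bounded b) {
    return a.min() <= b.max() && b.min() <= a.max();
}

// True only when all of a lies below all of b.
bool definitely_less_than(Bounded a, Bounded b) { return a.max() < b.min(); }

// True when a's mean lies below b's mean, even if the ranges overlap.
bool probably_less_than(Bounded a, Bounded b) { return a.mean() < b.mean(); }
```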

A financial program may use a decimal integer to accurately record paise and cents. Other programs may want rational numbers. Or complex numbers. Again, with the ability to overload operators, so that everything works as expected [3].

For custom numeric types to fit well into the language, we also need implicit promotions as in C++, like converting a double to a complex [4]. And conversely, if you have an unsigned_double type that's just a double with an assertion that it's >= 0, you should be able to implicitly convert it to a normal double.

Conversions should be implicit only when there's no loss of accuracy. Implicitly converting a rational to a double is probably a bad idea since you lose accuracy: converting 1/3 to a double and then multiplying it by 3 may not result in 1, exactly.

To facilitate custom numeric types, the language should also let us define data types, without a heap allocation, or other overhead like runtime type information that primitive types don't have. Performance considerations aside, custom numeric types will be natural to use only if they have the same syntax and semantics (like pass by value) that primitive types have [5].

[1] Named functions are more readable than cryptic operators like &+ or #+; it's anybody's guess what those mean.

[2] add_promote isn't defined on Int64, since there isn't a bigger type to promote to. This assumes a statically typed language.

[3] Since + is just an alias for a function named add(), when I say operator overloading, I really mean template specialisation — specialising the add() template for a particular type.

[4] Assuming a complex is a pair of doubles.

[5] If primitives are passed by reference, as in dynamically typed languages, so should custom numeric types. The point is consistency.

Four Levels of Productivity with Memory Management

Programming languages offer different ways of managing memory, trading off programmer productivity for performance [1]. Let's rank them in terms of productivity: which would let you build your software faster, and with fewer bugs to fix?

The easiest kind of memory management to use is garbage collection as in Java or Go.

Slightly less productive is Swift's automatic reference-counting [2]. This maintains a reference count for each object, and when the count reaches zero, the object is deallocated. You still have to worry about reference cycles, by annotating one of the references as weak. Weak references don't prevent the deallocation of the referred-to object. Instead, they become null when the target is deallocated. Swift offers a second kind of weak reference, unowned references, which are just weak references followed by a "not null" runtime check at each point of access [3].
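C++'s std::weak_ptr behaves like the weak references described here: it doesn't keep its target alive, and once the last strong reference goes away, lock() returns an empty pointer rather than a dangling one. A sketch (Delegate is a made-up type):

```cpp
#include <memory>

struct Delegate { int id = 7; };

bool weak_ref_becomes_null() {
    std::weak_ptr<Delegate> weak;
    {
        auto strong = std::make_shared<Delegate>();  // refcount: 1
        weak = strong;             // a weak ref doesn't bump the count
        if (weak.lock() == nullptr) return false;    // target still alive
    }                              // strong released: target deallocated
    return weak.lock() == nullptr; // the weak ref is now null, no crash
}
```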

All this has a much steeper learning curve than garbage-collection.

Even after you're comfortable with it and used it for years, as I have, ref-counting keeps resulting in occasional bugs. I recently had a crash where I registered a callback for the system to notify me when the user's location changed. But the callback object was getting deallocated by then, because the system had a weak reference to it, not a strong reference, causing a crash when it was invoked [4]. You can also make the opposite mistake, ending up with a strong reference cycle, causing a memory leak. These are the two problems garbage-collection doesn't have.

In addition to the learning curve, experienced developers aren't immune to making mistakes, so automatic reference-counting is less productive to program in than garbage-collection.

The next lower level of abstraction is something like Rust, which has Swift-style reference counts, but also other kinds of references: thread-unsafe reference-counts, unique references and temporary references. These are merely special cases of Swift-style reference counts, for performance, so choosing between them is an overhead you don't have in Swift.

Again, there are two aspects to the complexity: the learning curve, and the continual overhead of thinking about and choosing the right reference. This makes Rust's memory management less productive than Swift's.

Even less productive than Rust is something like C or C++, where it's much easier to accidentally make a whole range of mistakes that all the higher levels of abstraction above prevent: use after free, double-freeing, freeing an object through a pointer of the wrong type [5], freeing an array the wrong way [6], and leaking memory. That's a whole host of failure modes the more abstracted memory management systems above don't suffer from [7].

Ranking these from most to least productive: the most productive is garbage-collection. Slightly less productive is Swift-style automatic reference counting. Slightly less productive still is Rust, with four different kinds of references you must choose from, all special cases of ref-counting. The least productive is the C++ way, which exposes you to a whole range of runtime crashes and bugs.

Use the most productive abstraction that works for your use case. Otherwise, you're wasting your time, delaying your product, and risking losing in the market.

For example, for a server, I'd pick a garbage-collected language over a ref-counted one, everything else being equal.

[1] Which includes responsiveness and peak memory required.

[2] I'm ignoring languages that require you to manually insert calls to increment and decrement the reference count, since there's no reason to do manually, and with bugs, what the compiler can do for you.

[3] As far as I understand, Swift doesn't let you recover from such errors. This is a problem when you're invoking a third-party library, or want to contain failures. For example, I'm building a camera app, and one of the things a camera app must do is geo-tag photos. In Java, I can do:

try {
    // geotag the photo (this may fail)
} catch (Throwable t) {
    // couldn't geotag; save the photo without a location
}

I can't do this in Swift since I can't catch the nil dereference as an exception and keep going. The app crashes, which is the last thing the user wants at that point. That's a bad failure mode, the opposite of graceful degradation. A camera app that can't geotag the photo should at least be able to save the photo without geotagging it.

As another example, if you have a server that encounters a bug while servicing a request, you'd want that request to fail, rather than bring down the entire server. Again, Swift doesn't let you do that.

[4] Which is still better than C++, where you can corrupt memory and keep running. Swift crashes immediately and cleanly.

[5] Unless you have a virtual destructor.

[6] In C++, if you allocate an array, you should deallocate it using the delete[] operator, not the normal delete operator. Not doing so could corrupt memory.

[7] C++ does have ref-counted pointers, but they're painful to use and produce less readable code:

shared_ptr<Foo> p(new Foo);

... compared to plain pointers:

Foo *p = new Foo;

C++ tutorials also start with teaching people to use plain pointers.

Swift and Rust make the opposite choice, by making safe references the default type. If anything, using an unsafe pointer in Swift is harder:

let p = UnsafeMutablePointer<Foo>.alloc(1)

than using a ref-counted pointer:

let p = Foo()

Just because both C++ and Swift have both safe and unsafe pointers doesn't make them equivalent. The important point is that Swift chooses a more productive default than C++. Safe references in Swift are easier to learn and produce more readable code, and beginners start with them as opposed to unsafe references.

Defaults are powerful. Most people stick with the default, which makes C++ memory management a lower level of abstraction than Swift's. Even if you use a ref-counted pointer in C++, a library you use may not, so you're back at the lower level of abstraction.

12 Mar 2017

Languages Should Let Us Control Side-effects

The thing I worry about the most in my programs is state — the values of variables and objects. Do they have the right values? Are impossible values prohibited? Are the state transitions correct? I've found it prudent to stop and think about how to structure my program to minimise the chance of objects getting into invalid states.

We can use help from the language, in the form of support for immutable classes and for controlling side effects.

Immutable Classes

Immutable objects help reduce unintended side-effects, especially when working with a third-party library, especially a closed-source one. But immutable objects are very hard to implement in languages that don't natively support them.

For example, do you think this Java class is immutable?

class Person {
    private int age;
    static Person lastCreatedPerson = null;

    Person(int age) {
        lastCreatedPerson = this;
        this.age = age;
    }

    int getAge() {
        return age;
    }
}

This isn't immutable, because a reference to the object can be accessed by another thread before the field is initialised. So let's fix that, by eliminating the static variable, and make the field final too, to be safe:

class Person {
    private final int age;

    Person(int age) {
        this.age = age;
    }

    int getAge() {
        return age;
    }
}

Is this immutable? Again, no:

class MischievousPerson extends Person {
    public int age2;

    @Override int getAge() {
        return age2;
    }
}

Since the new field is public, the object's state can be changed. Suppose we fix this by making the class final. Consider this example:

final class Person {
    private final List<Person> friends;

    Person(List<Person> friends) {
        this.friends = friends;
    }

    List<Person> getFriends() {
        return friends;
    }
}

Is this immutable? No, because you could pass in a reference to a mutable list:

List<Person> friends = ...;
Person kartick = new Person(friends);
friends.clear();  // kartick's state just changed

So let's say we change the constructor to make a defensive copy:

Person(List<Person> friends) {
    this.friends = new ArrayList<>(friends);
}

Is this now immutable? No:

Person kartick = ...;
kartick.getFriends().clear();  // the getter returned the internal, mutable list

See how hard it is to define an immutable class?

So languages should natively support immutable classes, perhaps via an immutable keyword:

immutable class Person {

The compiler will do whatever it takes to make the class immutable. It will make the class final. It will make all fields final and, if they're of class type, ensure that they're themselves immutable or defensively copied, and that only side-effect-free functions are invoked on them.

The compiler will make sure that objects remain immutable even in the presence of race conditions, that the state of the object won't appear to have changed when accessed by multiple threads without synchronisation. Maybe you haven't thought of multithreading. Or you don't understand the memory model of the language. Or don't know what a memory model is. The compiler will still guarantee that an immutable object is, in fact, immutable.

You don't have to be concerned about all these implementation details, just that it is immutable.
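For comparison, here's roughly how far today's C++ gets by hand: a final class, const fields, a defensive copy on the way in, and copies on the way out. It's only an approximation of what an immutable keyword would guarantee, and the point of the proposal is that the compiler, not the programmer, should be responsible for all of this.

```cpp
#include <string>
#include <utility>
#include <vector>

class Person final {                 // final: no mischievous subclasses
public:
    // Taking the vector by value gives us a defensive copy at the call
    // site; we then move it into the const field.
    Person(int age, std::vector<std::string> friends)
        : age_(age), friends_(std::move(friends)) {}

    int age() const { return age_; }

    // Returning a copy keeps callers from mutating the internal list.
    std::vector<std::string> friends() const { return friends_; }

private:
    const int age_;
    const std::vector<std::string> friends_;
};
```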

Even highly-skilled programmers make mistakes, so let's have the compiler do the hard work for us [1]. We humans can then focus on the high-level properties of our classes, not on the mechanics of their implementation.

Controlling Side Effects

Not all objects can be immutable, so we should think about how to control side effects [2]. A side effect is when you invoke a function that, rather than just returning a value, modifies something, like an argument, or the receiver [3].

You may think you can figure that out by looking at the implementation of a function, but it may call other functions, which call yet others, and so on, so it's hard to track whether an argument you pass in will be modified somewhere down the call tree.

To fix this problem, functions that modify parameters should annotate them as mutating, as in this example:

static void copy(mutating List destination, List source)

This says that the copy() method may modify the destination list, say by adding elements to it. You can safely pass any list as the source parameter, knowing it won't be modified [4]. No need to make defensive copies, which you can forget to do, causing bugs. Defensive copies also hurt performance: you shouldn't have to waste time copying something you can prove won't be modified.
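C++'s const is a partial analog of this proposal: the signature itself records which parameter may be modified, and the compiler rejects mutation of the read-only one, though const can be cast away, a loophole the proposed design closes. A sketch of the copy() example:

```cpp
#include <vector>

// destination may be modified (non-const reference); source may not
// (const reference), and the compiler enforces it transitively: only
// const member functions can be called on source.
void copy(std::vector<int>& destination, const std::vector<int>& source) {
    for (int x : source) destination.push_back(x);
    // source.push_back(0);  // would not compile: source is read-only
}
```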

The mutating keyword also applies to methods:

class Person {
  int getAge();
  mutating void setAge(int age);
}

This says that the setter modifies the Person object [5].

A parameter's immutability cascades up and down the call tree: the copy() method above can call only side-effect-free methods on source. Mutating functions and mutating methods go together: if the List class didn't annotate its methods as mutating or not, the compiler wouldn't be able to verify that the copy() function conforms to its contract.

For clarity, the language could require the mutating keyword to be used at the call site, too:

List a = ...
List b = ...
copy(mutating b, a);

This eliminates the risk that you'll accidentally pass in an object without realising it can be modified, causing bugs.

This is a stronger guarantee than C++'s const, which you can const_cast away. Here, there'll be no const_cast, which would defeat the point.

This offers a stronger guarantee than Haskell, even, in a way. In Haskell, functions are supposedly side-effect-free, but you can use unsafePerformIO to cause side-effects, like modifying a variable. A Haskell function a() can call b() which calls c() which uses unsafePerformIO, and callers of a() won't know that a() can have a side-effect. Not in our language, which requires callers to be warned via the mutating keyword.


Tracking state is one of the hard parts of building software, and the most important, from the users' point of view, so our languages should evolve to let us control side effects.

[1] It's more helpful to have the compiler validate such high-level properties, as opposed to the administrivia that today's statically-typed languages distract us with.

[2] We don't need pure functions. It's okay for a function to do some computation based on the current time, for example, or print something to the console. The thing we want to prevent is changing the values of the variables and objects you already have. Pure functions would be too high a barrier, with little benefit.

[3] The receiver is the object you're calling the method on.

[4] Parameters default to readonly, with mutating parameters requiring an annotation. This is perhaps better than the opposite, because people may not mark readonly parameters as such, which in turn forces their callers to treat them as mutable. The right thing should be the path of least resistance.

[5] Though any method name beginning with "set" could automatically be considered mutating, to reduce the overhead of declaring things mutating that are obviously so.

1 Mar 2017

Simplifying Uber's Confusing Pricing

I took two Uber rides today, each of which ended up costing me ₹36 per km. The normal rate for UberX, which I used, is only ₹8 per km. The trips were at 2x surge, so they should have cost me only ₹16 per km, not ₹36 per km. But Uber has a complex pricing formula that works in unintuitive ways and seems designed to confuse and cheat us.

First, UberX has a rate of ₹1 per minute in addition to the per-km rate. Instead of adding these two, Uber should have used the higher of the two. After all, you can't travel any distance without taking time. We don't have Star Trek transporters. So you shouldn't have to pay twice for the same thing. Use the max. That way, for a given trip, you're paying only once, which is fairer. If that means increasing the per-minute fee from ₹1 to ₹2, say, so be it.

Second, Uber charges a base fare of ₹40 for any ride, in addition to the above. Putting all this together, the formula for UberX [1] is

(₹40 + ₹8 per km + ₹1 per minute) * surge

The base fare is needed because it doesn't make economic sense for a driver to drive 3-4 kilometers to pick me up, without me paying for it, and then drive me a short distance. The simpler solution to this problem is to get rid of the whole concept of a base fare and start the meter when the cab is assigned to you. That is, if the cab travels 4 km to your house, and you then take a 5 km trip, count it as a 9 km trip [2].

The current formula is, for UberX [3]:

(₹40 + ₹8 per km + ₹1 per minute) * surge

Simplify it to:

max(₹8 per km, ₹2 per minute) * surge

On second thoughts, surge pricing shouldn't apply to the per-km fee, only to the per-minute fee. Applying a surge multiplier to the per-km fee ends up encouraging shorter distance trips. But the distance is irrelevant. What matters is the duration of the trip. Surge pricing happens when there's a shortage of cars. So people need to be incentivised to vacate a car as quickly as possible, so that someone else can use it. It doesn't matter whether, for the time they occupy the car, they travel 5 km or 10 [2]. So the formula should be:

max(₹8 per km, ₹2 per minute * surge)

Then, the app can communicate surge pricing not as 2x but as "At least ₹4 per minute". That's more understandable. I can quickly estimate how much it will cost to go to a particular place.
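To compare the alternatives, here are the three formulas as functions, using the UberX rates from the text; the ₹2-per-minute rate is the article's suggested replacement for ₹1.

```cpp
#include <algorithm>

// Current: (₹40 base + ₹8/km + ₹1/min) * surge.
double current_fare(double km, double minutes, double surge) {
    return (40 + 8 * km + 1 * minutes) * surge;
}

// Simplified: the higher of the per-km and per-minute fees, surged.
double max_fare(double km, double minutes, double surge) {
    return std::max(8 * km, 2 * minutes) * surge;
}

// Final proposal: surge applies only to the per-minute fee.
double surge_on_time_fare(double km, double minutes, double surge) {
    return std::max(8 * km, 2 * minutes * surge);
}
```

For a 5 km, 20-minute trip at 2x surge, the current formula charges ₹200, while both simplified formulas come to ₹80.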

[1] UberGo uses the same formula, but ₹7 per km instead of ₹8.

[2] Or impose a minimum fee rather than a base fee. That is, say that an Uber ride costs at least ₹60, for example, not that it costs ₹40 + the regular fare.

[3] Alternatively, surge pricing can be a higher minimum, like "At least ₹100 per trip". This will discourage shorter trips, reducing the time wasted in having a car come to someone's pickup point, therefore increasing the time the car is carrying a passenger. A number like "At least ₹100 per trip" is also more understandable than an opaque number like 2x.

Languages That Let You Assemble Software from Components

I was reading Fred Brooks's classic paper, No Silver Bullet, and the enthusiasm about component marketplaces was striking. In the late 80s and 90s, at the peak of the excitement about object-oriented programming, many people envisioned a new model of building software: instead of coding it yourself, you'd build it from pre-made modules obtained from a component marketplace.

This marketplace hasn't materialised. We still code a lot from scratch. Too much, in fact. Ideally, when you build a new app, you should be coding only the things that make your app different and unique. But, in reality, most of our time goes into building the basics that all apps must have. This doesn't add value to users or to the business. Put differently, software development is highly wasteful in time, money and opportunity cost. How many more or better services might we have had if we spent our time better? How much more could an independent app developer accomplish over his career if he didn't waste most of his time reinventing the wheel?

What can we do to realise these benefits? I think the key is reducing the risk of integrating a third-party library into your app: risks to privacy, security, reliability and correctness. If a language guaranteed that a third-party library can't crash your app, leak your private data, or cause bugs or security holes in the rest of your app, it would lower the barrier to entry, hopefully letting you use a third-party component in situations where you otherwise wouldn't.

Some communities, like Node.js's, have a different culture: they've embraced components wholeheartedly. You use some libraries, they use others, and so on, and soon you're shipping hundreds of packages from people you don't necessarily trust. The Node community doesn't need encouragement to adopt components; they already have. But the question remains: can we reduce the risk of using random NPM packages from people you don't know or trust?

Even if you're not using third-party libraries, you can factor out some of your own code into an internal library for ease of understanding and maintainability, or because a different person or team is responsible for it. The same techniques that let you confidently use third-party libraries can be applied within a company or project to help you write better software.

Suppose we were to design a language from scratch to reduce the barriers and the risks to assembling software from components. What form would this language take?

Memory safety

... prevents a third-party library from crashing your app or corrupting its data. A library should be able to corrupt only its own data. This means, as in Java: no raw pointers, garbage collection, bounds-checked arrays, checked casts, compulsorily initialised references, and so on.

Memory safety is foundational for all other guarantees. If you don't have memory safety, all bets are off.

To verify these properties, libraries should be distributed as bytecode, not machine code. Or as source code. In this post, when I say "bytecode", I mean "bytecode or source".

This doesn't apply to apps (things with a main() function). Apps could be distributed in binary, using an AOT compiler, like Ngen. Or they could be distributed as bytecode and compiled to native code during installation. Or JIT'ted. Any of these is fine. It doesn't matter how apps are distributed, since they're not reused; it matters that libraries are distributed as bytecode.


Even modern, memory-safe languages like Swift let a third-party library crash your app, say by dereferencing nil. It's a controlled crash, not random memory corruption, but it still shouldn't happen. If things go wrong in a third-party library, it should produce an exception, as in Java, or, equivalently, return an error code. Not crash. Crashing should be a thing of the past, no matter how carelessly the library was coded.
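Java already behaves this way for null dereferences: they surface as a catchable exception rather than killing the process. A small sketch (CarelessLibrary is a made-up stand-in for a third-party dependency):

```java
// A carelessly coded "library" function that dereferences null.
class CarelessLibrary {
    static int nameLength(String name) {
        return name.length(); // NullPointerException if name is null
    }
}

class App {
    static boolean callSafely() {
        try {
            CarelessLibrary.nameLength(null);
            return true;
        } catch (NullPointerException e) {
            return false; // the app survives and can recover
        }
    }
}
```

In Swift, the equivalent force-unwrap of nil would instead terminate the whole app, with no opportunity to recover.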

No static variables

We should look at getting rid of static variables (including globals and local statics). Pass references explicitly. That way, a library can modify only objects it's been given a reference to. You know at the point of invocation which objects can be affected.

You no longer have to worry that a library modified some global state behind your back: either corrupting it, or modifying it in a way that's sensible on its own but clashes with your app or another library doing the same, stepping on each other's toes [0].
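The contrast can be sketched in Java (all names here are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// With a static variable, you can't tell from a call site what a
// library might touch: any code, anywhere, can modify this list.
class GlobalState {
    static List<String> log = new ArrayList<>();
}

// With an explicit reference, the library can only modify the object
// it was handed, and the call site documents exactly what that is.
class Library {
    static void doWork(List<String> log) {
        log.add("work done");
    }
}
```

At the call site, `Library.doWork(myLog)` names everything the library is allowed to modify; a call that reaches into `GlobalState.log` gives you no such visibility.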

Side-effect-free functions

A language designed for assembling software from pre-made components should support side-effect-free functions [1]. These don't modify their arguments, or the object they're invoked on (this). A side-effect-free function can call only other side-effect-free functions.

You can confidently invoke a side-effect-free library function anywhere, and in any situation.

The language should let us mark functions as having a side effect. Better, it should let us annotate which argument a function modifies. Here's an example of a function that copies one List to another:

void copy(mutating List destination, List source)

This way, you know not just that a function modifies state, but which argument it modifies. In the absence of the mutating keyword, functions are side-effect-free.

The mutating keyword also applies to methods:

class Person {
  int getAge();
  mutating void setAge(int age);
}

This lets you invoke library functions with confidence, knowing what objects might change as a result, and which won't.

Immutable Classes

Immutable objects help reduce unintended side-effects, especially when working with a third-party library built by someone else. But immutable objects are very hard to implement in languages that don't have support for them [2]. So the language should have support for immutable classes, perhaps via an immutable keyword that you'd use when declaring a class:

immutable class Person {

The compiler will do whatever it takes to make the class immutable: make the class final; make all fields final; and, if fields are of class type, ensure that they're themselves immutable, or defensively copied, with only side-effect-free functions invoked on them. These are all implementation details. You don't have to be concerned with how to implement an immutable object, just that it is immutable.

The compiler will make sure that objects remain immutable even in the presence of race conditions: the state of the object won't appear to change when accessed by multiple threads without synchronisation. Maybe you haven't thought about multithreading. Or you don't understand the memory model of the language. Or don't know what a memory model is. The compiler will still guarantee that an immutable object is, in fact, immutable.

Even highly-skilled programmers make mistakes, so it's good to have the compiler do the hard work for us [3], so that we can focus on the high-level properties of our classes, not on the mechanics of their implementation.


Sandboxing

You can load a library into a sandbox, or invoke a single function in one. Code in the sandbox isn't allowed to do I/O or to invoke dangerous OS or standard library functions. Such calls are guarded by a sentry, an object that inspects them and allows or disallows them. The default sentry denies everything, but you can define your own sentry with a different policy [4].
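A sentry might look something like this sketch; the Sentry interface and these class names are invented for illustration, not a real API:

```java
// A sentry inspects each guarded call from sandboxed code and decides
// whether to allow it.
interface Sentry {
    boolean allows(String operation, String target);
}

// The default sentry: deny everything.
class DenyAllSentry implements Sentry {
    public boolean allows(String operation, String target) {
        return false;
    }
}

// A custom sentry that permits only HTTPS requests.
class HttpsOnlySentry implements Sentry {
    public boolean allows(String operation, String target) {
        return operation.equals("http.request") && target.startsWith("https://");
    }
}
```

The runtime would consult the installed sentry before letting sandboxed code perform any guarded operation.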

The compiler automatically makes defensive copies of everything going into or out of the sandbox, except for immutable classes [5]. This way, there are no references across the sandbox boundary, which would defeat the point of the sandbox [6].

When a language is designed from scratch with sandboxing in mind, it can be implemented with zero, or close to zero, overhead, unlike retrofitting it onto a language like C++ after the fact, as with Google's Native Client.

In addition to limiting what API calls sandboxed code can make, we could also limit the memory used by the sandbox. This would probably require the sandbox to have a separate heap.

The language could support timeouts for calls into the sandbox, to prevent the library running in the sandbox from hanging your app. Or use async/await, so that your main thread isn't unresponsive.
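As a sketch of the timeout idea: today you can approximate it in Java with an ExecutorService; a sandbox-aware language could build it in. The SandboxCalls wrapper below is a made-up name, not a real API.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Run an untrusted call on a separate thread and give up if it doesn't
// finish in time, so a hung library can't hang the app.
class SandboxCalls {
    static <T> T callWithTimeout(Callable<T> libraryCall, long millis) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            // Throws TimeoutException if the library call doesn't finish in time.
            return executor.submit(libraryCall).get(millis, TimeUnit.MILLISECONDS);
        } finally {
            executor.shutdownNow(); // best effort: interrupt the call if it's hung
        }
    }
}
```

A language with first-class sandboxes could enforce this without the caller having to manage threads at all.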

You could restrict background execution, letting the sandbox run only when there's a pending call into it, plus a grace period of 5 seconds afterward. This prevents a third-party library from draining the battery or slowing down your app when it's not being used.

If that's too strict, you could let it create threads, but limit their priority so that they don't interfere with higher-priority threads, like the UI thread.

You could force libraries to use efficient abstractions like Grand Central Dispatch or goroutines instead of kernel threads. You can create tons of GCD tasks or goroutines without overloading the system, which may be especially important in constrained environments like phones. Tasks and goroutines also consume far less memory than a kernel thread.

With a sandbox, an ad library in a mobile app that happens to have contacts permission won't be able to abuse that permission to upload the user's contact list to the ad network's servers, for example. You could also require it to make HTTPS calls, not HTTP, if that's what you want. Or prevent it from hanging your app or making it unresponsive. Or from consuming too much memory and crashing. And so on.

At a high level, when you sandbox a library, you can prevent it from hurting the privacy, security, reliability or performance of your app.

No extensions

Some languages, like Swift, let you modify existing classes, including system classes. This would be a mis-feature in a language designed to aid assembly of software from components. A library you include in your app shouldn't be able to modify classes it doesn't own, interfering with your app or other libraries.


Portability

A language designed to aid assembly of software from components should be portable, in many different ways.

First, code should be hardware-independent, like Java. A library should work on all devices, and in exactly the same way. It should be distributed as bytecode or source code, not machine code.

Second, languages like C leave some decisions to the compiler, like whether a char is signed or unsigned. This is different from hardware-dependence, because two compilers on the same hardware could implement these things differently, preventing you from using a library built with one compiler in an app built with another. This is again a mis-feature in a language designed for component reuse. Any two compilers should generate compatible code.

This means ABI stability as well. That mostly comes free when you distribute bytecode, but not always. For example, in Java, if you're using a third-party library that defines a public static final int field, the compiler inlines its value into your code. If the next version of the library changes the value of the constant, your code is stuck with the old value until you recompile, which can cause bugs or even exceptions. This is a mis-feature of Java. Recompiling should never change the behavior of your app.
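The Java pitfall looks like this (names made up for illustration):

```java
// MAX_CONNECTIONS is a compile-time constant, so javac copies its value
// into every client that reads it.
class Library {
    public static final int MAX_CONNECTIONS = 10;
}

class App {
    static int limit() {
        // Compiled as "return 10", not as a field read. If a later version
        // of the library ships MAX_CONNECTIONS = 20, this method still
        // returns 10 until App itself is recompiled.
        return Library.MAX_CONNECTIONS;
    }
}
```

Swapping in the new library jar at runtime changes nothing for App; only recompiling App picks up the new value, which is exactly the behavior change that shouldn't exist.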


When we use a third-party library, the language should contain as many problems as possible within the library, rather than letting them put the privacy, security, reliability or correctness of our software at risk. Then we'll be able to confidently use third-party libraries without knowing or caring as much about who built them. This frees us to implement only what's different and unique in our software, relying on pre-made components for the things that are the same, making us more productive, both individually and as an industry.

[0] Eliminating static variables will be painful, requiring references to be passed around. Perhaps we can ease the pain by allowing dynamic scoping: a way for a function to declare a local variable and have it be visible to all the functions it calls, recursively, without having to pass it explicitly. But dynamic scoping should be allowed only within the same module. A module is, as in Swift, a bunch of files that are compiled together and that don't make sense to distribute or use separately. Each library is a separate module, and your main app another. That way, you won't have the pain of passing tons of references around within your module, but calls into a library still explicitly identify which objects the library can modify.

[1] We don't need pure functions. It's okay for a function to do some computation based on the current time, for example, or print something to the console. What we want to prevent is changes to the values of the variables and objects you already have.

[2] For example, do you think this Java class is immutable?

class Person {
    private int age;
    static Person lastCreatedPerson = null;

    Person(int age) {
        lastCreatedPerson = this;
        this.age = age;
    }

    int getAge() {
        return age;
    }
}
This isn't immutable, because the constructor publishes a reference to the object through the static variable before the age field is initialised, so another thread can observe the object's state change. So let's fix that by eliminating the static variable, and make the field final too, to be safe:

class Person {
    private final int age;

    Person(int age) {
        this.age = age;
    }

    int getAge() {
        return age;
    }
}

Is this immutable? Again, no:

class MischievousPerson extends Person {
    public int age2;

    MischievousPerson(int age) {
        super(age);
    }

    @Override int getAge() {
        return age2;
    }
}
Since the subclass adds a public field, the object's state can be changed by anyone. Suppose we fix this by making the class final. Consider this example:

final class Person {
    private final List<Person> friends;

    Person(List<Person> friends) {
        this.friends = friends;
    }

    List<Person> getFriends() {
        return friends;
    }
}
Is this immutable? No, because you could pass in a reference to a mutable list:

List<Person> friends = ...;
Person kartick = new Person(friends);
friends.clear(); // kartick's friends have just vanished

So let's say we change the constructor to make a defensive copy:

Person(List<Person> friends) {
    this.friends = new ArrayList<>(friends);
}

Is this now immutable? No, because the getter still hands out a reference to the internal list:

Person kartick = ...;
kartick.getFriends().clear(); // mutates kartick's internal list
See how hard it is to define an immutable class?
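For completeness, here's one way to plug all of the holes above at once: final class, final field, a defensive copy on the way in, and an unmodifiable view on the way out. (It still relies on the elements themselves being immutable.)

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

final class Person {
    private final List<Person> friends;

    Person(List<Person> friends) {
        this.friends = new ArrayList<>(friends); // caller can't mutate our copy
    }

    List<Person> getFriends() {
        return Collections.unmodifiableList(friends); // callers can't mutate it either
    }
}
```

That's four separate precautions for one small class, which is exactly why an immutable keyword that makes the compiler do this is worth having.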

[3] It's more helpful to have the compiler validate such high-level properties, as opposed to the administrivia that today's statically-typed languages distract us with.

[4] A sentry could theoretically re-implement calls, in addition to just allowing or denying them, like an in-memory filesystem.

[5] A class that's neither immutable nor copyable can't be passed into or out of the sandbox.

[6] You'll be able to add an annotation to tell the compiler not to make a defensive copy of an argument or a return value. Imagine an image-processing library. A 12-megapixel, 16-bits-per-channel bitmap is 72MB. We may not want to copy that in the name of sandboxing. The sandbox can't be too rigid. Defer to the programmer.