Are True Names in Earthsea digital encryption keys?

I’ve recently reread A Wizard of Earthsea, a classic fantasy novel by Ursula K. Le Guin. The novel is set in a magic world and tracks the progress of Ged, a mage with unprecedented powers, from his childhood to adulthood. Keeping in mind an often cited quote by another classic writer, Arthur C Clarke: “Any sufficiently advanced technology is indistinguishable from magic”, I decided to peel back the curtain of magic and imagine what practical inventions may stand behind the magical concepts of the Earthsea world. I’m most interested in the concept of true names. I will show how true names could relate to modern-day digital encryption – with a mage’s true name being his personal digital certificate.

Some of the technology behind the magic in the book is trivially easy to see. For example, when Ged applies a spell to a brackish spring on a tiny sea island, it means he has installed a filtration system. Water desalination is a plausible alternative, but, since it is more complex and bulky (using modern technology), it is less likely.

There are many cases when figuring out the exact technical solution is impractical, because it is described in magical terms. Now, any technology would look like magic to people who don’t understand it. They would simply have no concepts to explain it and wouldn’t know where to look for clues. So naturally, people in the medieval societies where magic fantasies are usually set can’t provide adequate descriptions of the advanced technology they mistake for magic. Hence the readers lack cues to accurately translate the magic into modern-day technical concepts. This is certainly by design.

With this, let’s get to true names, the most creative and crucially important device in this book. Many entities in the Earthsea universe have secret true names, which carry special magical meaning. Here’s what we know about true names:

  • Humans, other living beings and geographical features (seas and islands) have true names.
  • Each human has a unique true name.
  • A true name is permanent and can’t be changed.
  • Animals have one true name for the whole species.
  • Ged’s true name was assigned to him by a mage when he was near puberty. He didn’t have a true name before that. Many other people aged 14 and up also have true names. No human other than child Ged is specifically described as lacking a true name.
  • True names can be used by beings with magical powers (human mages and dragons) to control the behavior of the name bearer.

What we don’t know:

  • Do human-made objects (such as houses and boats) have true names? We know that spells are routinely applied to them, but it’s not clear whether such spells require knowing the true name of the object.
  • Do all humans get their true names assigned by mages in their teen years? There is not enough information in Earthsea to draw conclusions (and I haven’t read the sequels to Earthsea).

Based on what we know, true names must mean different things for humans, animals and inanimate objects. This would explain the fact that human true names are individual, while animals of the same species share the same true name. We also know that there are several kinds of magic, taught by different Master Mages. It stands to reason that if true names unlock different kinds of magic, then the true names themselves may be of different types. Let’s start with animals, which seem to be the easiest case. For animals, the logical true name would be a DNA signature. The magic it enables is, then, biotech-based.

For humans, we could also go with personally targeted biotech. One example of this is found in Vernor Vinge’s book Rainbows End. The book is mostly about near-term advanced digital technology and augmented reality. At one point, Alice Gu is incapacitated by a virus targeted at her individually. The creation of the virus required a sample of her DNA.

There are other ways to individually target humans. The simplest one is an identifier issued by a competent authority, something like a Social Security Number in the US. And it doesn’t have to be a number – it could be just a name. After all, only a few years after Earthsea, Ursula K. Le Guin uses this idea in her book The Dispossessed – individual names are unique and assigned on a planet-wide basis.

In this case, determining someone’s true name would be roughly equivalent to modern-day identity theft. We know that identity theft enables the thief to take over the victim’s accounts, disrupting his ability to communicate and his access to finances. It can be really demoralizing, which matches the effect of an adversary knowing your true name (rendering you unable to defend yourself). For this to work, the target has to be quite advanced and sophisticated: medieval subsistence farmers won’t suffer much from identity theft, because they don’t bank or use Skype. From the book, it’s not clear whether revealing a true name is equally damaging to different people. We know that it has a debilitating effect on mages, but they are a unique caste living by special rules. We have to assume that they are the super-sophisticated hi-tech elite of Earthsea, so they could indeed suffer from identity theft.

Another possibility is a breach of anonymity. In one of the first Sci-Fi books of the digital era, True Names, Vernor Vinge shows how a cyberspace character is in grave danger if his real-world identity is exposed. Thus, the true name is simply the real name in the physical world. In True Names, the danger comes from the government, which is able to apply its heavy hand to coerce desired behavior in cyberspace. But “we know where you live” has also been an effective threat from non-government actors, such as the mafia, the KKK and jihadists. In the Internet era this threat has grown more acute and acquired a new name: doxing.

Finally, here’s the tantalizing possibility: a true name could be a person’s private encryption key. Such a key is necessary to create digital signatures proving the authenticity of communications. And it must be kept secret, otherwise an attacker could impersonate the victim. Again, this will only work against hi-tech targets dependent on electronic gadgets and communication services. If you are a mage deploying an array of electronic devices, sensors, weapons and so on, a lost private key means that your adversary can now impersonate you in all digital communications. He can issue commands that will appear to be coming from you. He can control all of your devices and turn your weapons against you, which must spell immediate defeat. The threat is made more severe by the fact that in Earthsea, the key (true name) can’t be changed. We can see the impact of this oversight in the book, where once someone’s true name is known, that person (or dragon) remains in danger forever. The ability to change keys is a fundamental tenet of modern cryptosystems: if keys are stolen, changing the keys (think passwords) allows a quick closing of the security breach. Changing keys is so important that some security experts advise against biometric security (e.g. fingerprints or retina scans) precisely because those secrets are unchangeable. Once your fingerprints make it into the wrong hands, you are as helpless as a wizard of Earthsea whose true name has been revealed.
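To make the private-key analogy concrete, here is a minimal sketch using the JDK’s standard java.security API (the class name and the signed “command” are my own invention, purely for illustration):

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class TrueNameDemo {

    // Sign a command with the private key, then verify it with the
    // public key. Returns true when the signature checks out.
    static boolean signAndVerify() throws Exception {
        // The mage's "true name": an RSA key pair. The public half is
        // freely shared; the private half must stay secret.
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair mage = gen.generateKeyPair();

        byte[] command = "seal the spring".getBytes(StandardCharsets.UTF_8);

        // Only the holder of the private key can produce this signature...
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(mage.getPrivate());
        signer.update(command);
        byte[] sig = signer.sign();

        // ...but anyone holding the public key can verify it. An adversary
        // who learns the private key can sign commands that every device
        // trusting this key will accept as the mage's own.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(mage.getPublic());
        verifier.update(command);
        return verifier.verify(sig);
    }

    public static void main(String[] args) throws Exception {
        System.out.println("signature valid: " + signAndVerify());
    }
}
```

If the private key leaks, the verification step succeeds for the attacker’s messages just the same – and, as in Earthsea, the only real remedy is replacing the key, which true names don’t allow.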

If true names can be understood in terms of digital encryption, then a book on the current FBI vs Apple encryption fight was written half a century ago.

Locating JIT-generated machine code using gdb

This article describes how I was able to locate machine instructions generated by the OpenJDK JIT compiler on Linux using the gdb debugger. It may be useful to those who want to understand what JIT does with your code. I used this procedure to find that JIT emits efficient branchless code, as described here. I decided to share this information because I spent a considerable amount of time figuring it out. It’s possible that easier approaches exist – please feel free to suggest them.

First, I used these 2 options with the java command: -XX:+AggressiveOpts -XX:+PrintCompilation

AggressiveOpts forces the JVM to compile nearly everything it sees into native instructions. PrintCompilation makes it report what it does compile. With these 2 options together, you have a great chance that your target function will be compiled, and you get a log statement to confirm that it was.

Next, I found that it is best to place a breakpoint just before your target section of code. But how does one set a breakpoint? Java breakpoints are useless for native code debugging, because Java debuggers don’t give you an opportunity to jump from Java code into native instructions. A breakpoint in the Java interpreter code in the JVM would be triggered countless times while the JVM interprets your Java program. I found this to be impractical even for tiny handcrafted sample programs. The answer: you create a simple no-op native function and set a breakpoint in it!

I took inspiration from Fahd’s core dumper code. He describes how to invoke native code from Java to cause a core dump (by deliberately dereferencing a null pointer). We can take this example, but eliminate the core dump. Our function will do absolutely nothing – it will simply serve as a hook to hang the debugger breakpoint on.

The steps will be:

1. Create a Java class:


public class Hookable {

    // load the native library
    static {
        System.loadLibrary("nativelib");
    }

    // native method declaration
    public native void here();

    public static void main(String[] args) {
        // any prep work
        new Hookable().here();
        // Your target code goes here
    }
}

2. Compile your class with javac


javac Hookable.java

3. Generate JNI header file


javah -jni Hookable

4. Write C code for the function that you will use as a breakpoint target in gdb:


#include "Hookable.h"

void foo() {
    int a = 2;
    return;
}

JNIEXPORT void JNICALL Java_Hookable_here
  (JNIEnv *env, jobject obj) {
    foo();
}

5. Compile the C code


gcc -fPIC -o libnativelib.so -shared -I$JAVA_HOME/include/linux/ -I$JAVA_HOME/include/ Hookable.c

6. Launch gdb to run Java under its control:


gdb --args java -XX:+AggressiveOpts -XX:+PrintCompilation Hookable

When gdb starts, add a breakpoint for foo. gdb will complain, because Java is not yet running, so foo can’t be found yet. Confirm that you really want to add this breakpoint by responding “y” to the prompt:


(gdb) break foo
Function "foo" not defined.
Make breakpoint pending on future shared library load? (y or [n]) y
Breakpoint 1 (foo) pending.

Run Java:


(gdb) run
Starting program: /usr/bin/java -XX:+AggressiveOpts -XX:+PrintCompilation Hookable
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7ffff7fce700 (LWP 7336)]
…

gdb will stop at the breakpoint in foo:

Breakpoint 1, 0x00007fffe48c96ff in foo ()
   from /home/ivan2/dev/benchmarks/libnativelib.so
(gdb)

Now would be a good time to set up your debugging session. For example, tell gdb to display the next machine instruction to be executed:


(gdb) display/i $pc
1: x/i $pc
=> 0x7fffe48c96ff <foo+4>: movl $0x2,-0x4(%rbp)
(gdb)

Exit foo and Java_Hookable_here, the 2 frames that exist solely as markers and a place to hang the breakpoint on:


(gdb) fin
Run till exit from #0 0x00007fffe48c96ff in foo ()
(gdb) fin
Run till exit from #0 0x00007fffe48c9723 in Java_Hookable_here ()

Your debugging session is now stopped at the point marked “// Your target code goes here”. The next debugger step will take you out of native C code and back into the Java realm, exactly where you want to be. Happy hunting for JIT output!

Confirmed: OpenJDK JIT compiler emits efficient conditional move machine code

On the popular programmers’ Q&A site stackoverflow.com, the hit question of the last few years dealt with the effect of branching on program performance. I recommend that you read the brilliant explanation there, but if you don’t have much time, here’s the summary. Modern processors execute commands in a pipeline. While one command is finishing executing, the next one is already started, the one after that is being prepared and so on. Pipelining can only work if the processor knows which command will execute next. If the program is linear, determining the next command is trivial. But if the program branches (think “if” statement), the processor can only guess what comes next. If the processor guesses wrong, it has to back out of the several steps of the wrong branch it took. Such a backout severely degrades performance.

Fortunately, modern processors have branch prediction mechanisms, which allow them to guess branching correctly when a particular branch exhibits consistent behavior. An example would be an if statement inside a loop. If the code takes the “then” branch consistently, the processor will be able to make the correct guess and performance will remain optimal.
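To see this effect for yourself, here is a condensed, self-contained variant of the Stack Overflow experiment (my own sketch, not the original benchmark; exact timings depend on the machine, and the JIT may itself remove the branch, which is this article’s point):

```java
import java.util.Arrays;
import java.util.Random;

public class BranchDemo {
    // Sum all elements >= 128; the if inside the loop is the branch
    // whose predictability we are probing.
    static long conditionalSum(int[] data) {
        long sum = 0;
        for (int v : data)
            if (v >= 128) sum += v;
        return sum;
    }

    public static void main(String[] args) {
        int[] random = new Random(42).ints(1_000_000, 0, 256).toArray();
        int[] sorted = random.clone();
        Arrays.sort(sorted);

        // Warm up the JIT before timing each variant. On branchy code the
        // sorted input is typically faster; exact numbers vary by machine.
        for (int i = 0; i < 10; i++) { conditionalSum(random); conditionalSum(sorted); }

        long t0 = System.nanoTime();
        long s1 = conditionalSum(random);
        long t1 = System.nanoTime();
        long s2 = conditionalSum(sorted);
        long t2 = System.nanoTime();

        System.out.printf("random: %d us, sorted: %d us, sums equal: %b%n",
            (t1 - t0) / 1_000, (t2 - t1) / 1_000, s1 == s2);
    }
}
```

If the two timings come out nearly identical on your machine, that is likely the conditional move optimization discussed below doing its job.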

So efficient code would avoid branching or at least make branching predictable. One could assume that to write efficient code, the programmer needs to be aware of branch prediction and code around it. An example in the linked Stack Overflow article shows how sorting data before processing it in a loop creates consistent branching and dramatically improves performance. Another trick would be to replace mini-branches with equivalent branchless code. For example, the function min(a,b) is normally written as

if (a<b) return a; else return b; 

It could be rewritten (for Java’s 32 bit signed ints, with an assumption on input range not causing overflow/underflow) as

return a - ((b-a)>>31 & 1)*(a-b);

Should you write code in this obscure style to gain performance? I’m going to show that in Java this trick is unnecessary.
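For the skeptical, the bitwise trick can be checked mechanically against the straightforward version (my own test harness, not from the original benchmark):

```java
public class BranchlessMin {
    static int ifMin(int a, int b) {
        if (a < b) return a; else return b;
    }

    // The branchless variant: (b-a)>>31 is -1 when b<a (the sign bit is
    // smeared by the arithmetic shift), 0 otherwise; &1 turns that into
    // 1 or 0, selecting whether (a-b) gets subtracted from a.
    static int bitMin(int a, int b) {
        return a - ((b - a) >> 31 & 1) * (a - b);
    }

    public static void main(String[] args) {
        // Exhaustively compare over a grid of in-range inputs.
        for (int a = -1000; a <= 1000; a += 7)
            for (int b = -1000; b <= 1000; b += 11)
                if (ifMin(a, b) != bitMin(a, b))
                    throw new AssertionError(a + "," + b);
        System.out.println("both versions agree");
    }
}
```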

The key to the problem lies in the fact that in some cases, an if statement doesn’t produce branching! x86 processors (Intel and AMD) that power most PCs and a great many servers have included conditional move instructions since the Pentium Pro, released about 20 years ago in 1995. A conditional move instruction is a way to achieve a branching effect without branching code. The meaning of this instruction is “move a value from one place to another IF the specified condition is met, do nothing otherwise”. This instruction can be used to determine the minimum of 2 values or to express a ternary operator (?:) in C or Java.

The Stack Overflow post cited benchmark results that suggest that C compilers are not consistent in emitting these efficient conditional move instructions. Java seemed to use conditional move, based on performance results. I ran a slightly different benchmark (computing minimum of 2 numbers using if-return and ternary operator), comparing run times between branch-friendly ordered input data and noisy random input data. There was no difference in execution times, suggesting that branching was not a factor.

I decided to check what instructions OpenJDK JIT compiler generates. For my tests I used Java 7 on my home Ubuntu Linux running on 64 bit AMD processor.


ivan2@bigbox:~$ java -version
java version "1.7.0_79"
OpenJDK Runtime Environment (IcedTea 2.5.5) (7u79-2.5.5-0ubuntu0.14.04.2)
OpenJDK 64-Bit Server VM (build 24.79-b02, mixed mode)

I wrote a primitive test case with simple min function:


public int ifmin(int a, int b) {
    if (a < b) return a; else return b;
}

My goal was to confirm that this simple code would be compiled to native instructions as a conditional move (CMOVxx), not a conditional jump (Jxx). My primitive function compiled to this Java bytecode:


0: iload_0
1: iload_1
2: if_icmpge 7
5: iload_0
6: ireturn
7: iload_1
8: ireturn

I caused the JVM to compile the bytecode to native instructions and traced the execution with the gdb debugger. By the way, locating the compiled code took quite a bit of effort. I probably missed an obvious approach, but in any case, I explain how I pinpointed JIT-generated machine code in a separate post. I verified that compilation happened in a log file:


Compilation events (4 events):
Event: 0.077 Thread 0x00007f03bc0a9000 1 ivan.Code::ifmin (9 bytes)

I was then able to trace the execution of the function and see the actual machine instructions. The relevant section of gdb session looked like this:


=> 0x7fffed060190: sub $0x18,%rsp
=> 0x7fffed060197: mov %rbp,0x10(%rsp)

The instructions above are the function prologue, setting up the stack frame in preparation for the comparison.


=> 0x7fffed06019c: cmp %ecx,%edx

The above instruction compared a with b.

=> 0x7fffed06019e: mov %ecx,%eax

And the above moved b to the result register.

=> 0x7fffed0601a0: cmovl %edx,%eax

Next: a conditional move if a<b. The logic of this step is as follows: prior to it, result=b (we just moved b there). Now, if a<b, the result is overwritten with a, so result=a. Otherwise, the result remains unchanged, meaning result=b. Just what we wanted!

Next, finalize and return – nothing terribly interesting


=> 0x7fffed0601a3: add $0x10,%rsp
=> 0x7fffed0601a7: pop %rbp
=> 0x7fffed0601a8: test %eax,0xaf96e52(%rip) # 0x7ffff7ff7000
=> 0x7fffed0601ae: retq

Similarly, I confirmed that the ternary operator is compiled into a conditional move instruction. This Java code:


int a = aarr[1];
int b = barr[2];
res[3] = a < b ? a : b;

produced a similar machine code sequence, including a conditional move:


=> 0x7fffed05fc9c: cmp %ebp,%r11d

The above compares a with b.

=> 0x7fffed05fc9f: cmovl %r11d,%ebp

And this line overwrites one with the other – conditionally!

Conclusion: for simple conditionals, modern JIT compilers emit efficient branchless code. No additional programming tricks are required. The clever interview trick of computing the minimum of 2 numbers through obscure bitwise code should remain confined to interview rooms – it does nothing to improve performance in real applications.

Cumulative moving average to be put on the Voyager spacecraft? Use a Two-Stage Rocket!

Cumulative moving average is a simple concept: you have a series of values (observations), you add them up one at a time, and divide by the number of items to arrive at the average. It is an online algorithm: the cumulative average may be computed at any point in time, taking into account all the values received up to that point.

It gets only slightly more complicated if your data series is very long (if your algorithm should be able to work long enough to be put on the Voyager spacecraft). In this case, you can’t simply keep adding up numbers, because the sum will overflow. One possible solution to this problem, described in the “Voyager” post, is to keep the average instead of the sum. The downside of this approach is a loss of precision caused by repeated floating point operations on the average. I observed the loss of precision by feeding this averager the numbers from 0 to 100 (average 50), repeated 1 million times. At the end of the run, the computed average was off by about 1e-11. It is not a huge difference, but it is there. And an int-based summing averager predictably overflowed (100 million values averaging 50 add up to 5 billion; the int range in Java goes up to a little over 2 billion).
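Here is a sketch of the keep-the-average approach as I understand it (my own reconstruction for testing, not the actual code from the “Voyager” post):

```java
public class RunningAverage {
    private double avg = 0.0;
    private long count = 0;

    // Standard incremental update: avg absorbs each new value with
    // weight 1/count. Every step performs a floating point division
    // and addition, so rounding errors slowly accumulate.
    public void accept(int value) {
        count++;
        avg += (value - avg) / count;
    }

    public double getAverage() { return avg; }

    public static void main(String[] args) {
        RunningAverage ra = new RunningAverage();
        for (int rep = 0; rep < 1_000_000; rep++)
            for (int v = 0; v <= 100; v++)
                ra.accept(v);
        // The true average is exactly 50; the computed one drifts slightly.
        System.out.println("error: " + (ra.getAverage() - 50.0));
    }
}
```

The error after 101 million updates is tiny but nonzero, which matches the observation above.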

Fortunately, there is a better solution, which is also in keeping with the space theme: use a two-stage rocket! As the first stage, we will use the traditional averaging approach, keeping the sum of values. Our second stage will keep the sum of averages. After a predetermined number of observations, we will push our average from stage one to stage two and reset stage one to zero. This approach enables an exponential increase in the number of values our averager can accept. And if we need to travel to another galaxy, we could use a 3-stage rocket!

When I fed the two-stage averager the same test set of numbers as above (0 to 100, repeated 1 million times), the output was precisely 50. No loss of precision occurred.

Here’s the code:

public class TwoStageAverage {

   private int count2=0, count=0;
   private int sum=0;
   private double sum2=0.0d;
   private static final int STAGE1LIMIT=0x10000;

   public void accept(int value) {
      sum += value;
      count++;
      if (count==STAGE1LIMIT) {
         // accept to stage 2
         sum2 += ((double)sum)/STAGE1LIMIT;
         count2++;
         // reset stage 1
         sum=0;
         count=0;
      }
   }

   public double getAverage() {
      // cast to double so count2*STAGE1LIMIT can't overflow int arithmetic
      return (sum2*STAGE1LIMIT+sum)/((double)count2*STAGE1LIMIT+count);
   }
}

Augmented Reality and other predictions in Rainbows End

With the FAA giving approval to Amazon to test aerial drone deliveries, one of the major predictions in Vernor Vinge’s Sci-Fi novel “Rainbows End” is edging ever closer to being realized. What about the other predictions from this book?

Vinge is one of my favorite Sci-Fi authors. He broke into professional writing in the mid 80s, when the space era of Sci-Fi was fading out, being replaced by the computer age. Coming from a hardcore computer background (professor of Computer Science), Vernor Vinge mercifully avoided the siren songs of cyberpunk with its wild descriptions of life in virtual reality, which are embarrassing to read a couple of decades hence. Instead, he gave us a beautiful and thoughtful book illustrating his published scientific thoughts on Artificial Intelligence, “A Fire Upon the Deep”. Much later, in 2006, came “Rainbows End”, set in the Augmented Reality world of a near future, circa 2025. Now that half of the time to the book’s setting has passed, I decided to look at some of the ideas and gadgets described in that book, because it does what the best Science Fiction does. It not only entertains, but also educates about technology and warns about potential issues.

The most important technical inventions in the book go by acronyms SHE and YGBM. SHE stands for Secure Hardware Environment, a hardware element that is owned by the government and must be a part of all consumer electronics. It is the means by which the government intrudes into the everyday lives of people. Not much is known about what SHE actually does. Evidence of its efficacy (security) is not clear.

This sounds like modern day government surveillance programs, except so far they are not inside our consumer devices. We know that the government is pushing for it, though, and the industry is publicly resisting. Ominously, unlike in Vinge’s fantasy, governments are not even offering a promise of security to sweeten the deal. I’d score this prediction a partial hit – still on track, but not there yet.

YGBM stands for “You Gotta Believe Me” – a combined bio-electronic technology that, well, makes the targeted person believe the operator’s claims. The book’s plot revolves around an attempt by a sinister group to secretly develop practical YGBM technology. YGBM definitely remains something out of the realm of fantasy. Any commercial efforts to improve online/mobile ad quality are, so far, fortunately, in a different universe from YGBM. For the sake of completeness, I’d mention that the Russian government has achieved spectacular results with nothing more than conventional propaganda tools. Fortunately, this prediction remains a miss.

Following these 2 big ideas, there is a cornucopia of technological advances. Most notable and critical to the book’s plot is the universal use of Augmented Reality delivered through contact lenses and smart clothing. Augmented Reality is simply an overlaying of computer-generated imagery on top of the real world view (other senses may be engaged as well, but vision is by far the most important, simply because of the way we are built). One example of AR that would be familiar to the American audience is the yellow First Down line in TV broadcasts of gridiron football. The line doesn’t exist on the football field but is clearly seen by the TV viewers. It is different from, say, the score box, because it works as an integral part of the field view – it appears to stay in place as the camera view moves. To give you one example of what AR could become, imagine you are in a business meeting with a group of people you meet for the first time – and each person’s name and title are neatly displayed above her head whenever you look at her. Dialing it up a notch: imagine a language translation program paired with an AR system, so that when you look at a text in a foreign language, you see the translation instead. In fact, this is not a fantasy – there is an app for that: Word Lens. In the military domain, AR exists in the form of the HUD (Heads Up Display).

For more ideas, you can check out Wikipedia or just read Vinge’s book.

AR is a tantalizing idea because it makes sense and it seems the technology to deliver it is almost there. Google made a high profile AR push with Google Glass. Unfortunately, Google Glass teased us with promise, but so far has failed to deliver.

In my opinion, Google Glass failed because it was not easy enough to use, even though the idea of bringing AR to the public is compelling. I think the Glass’ problem was its inadequate input technology – speech recognition. As an output device, the glasses may have been workable. But constantly talking to your device simply is not efficient. Critically, it is downright weird when done in public. Note that Vinge’s fictitious device featured “silent messaging” – an ability to communicate discreetly, without attracting bystanders’ attention. And of course the public was offended by the perception of invasion of privacy created by the Glass’ recording capability.

So it looks like AR will have to wait until technology matures to allow easy, silent and discreet input. It will be as much of a revolution as the mouse, which enabled PCs and started a new era. We don’t know yet what it will be – it might be gesture capture, eye movement tracking, brain wave reading, something else altogether or some combination of multiple methods.

And earlier this year Facebook announced that it also wants to jump on the AR bandwagon. Hopefully by 2025 Augmented Reality becomes a part of everyday life.

Partials book series review

Dan Wells may have entered a writing contest with Veronica Roth a few years back. The object of the contest was to write a Young Adult post-apocalyptic dystopian trilogy set in a small human community surviving in a formerly major US city. The main character is a teenage girl who becomes a hero, bearing arms and fighting for her community. The situation was created through large-scale genetic work on humans carried out by the US Government. Veronica Roth entered the Divergent series. Dan Wells countered with the Partials series.
Well, the results of the contest are in and Roth won. Her Divergent series is popular and is being made into a popular movie series. Wells’ Partials series doesn’t even have a Wikipedia entry.
The Partials series (the Partials, Fragments and Ruins novels, plus the bonus Isolation story) is not awful, it just has very uneven quality, good writing mixed with bad. The action is fast paced and readable. The characters are well developed, but there is a discontinuity: it is never explained how the main character, a regular teenage girl, all of a sudden becomes an “army of one” soldier, killing time and again and never having a second thought. Divergent’s Tris, who killed Will in a split-second action during a gunfight and forever agonizes over that decision, is more believable.
The premise of the Partials series is a full-scale conflict between humans and large numbers of alienated bio-engineered humanoids. This problem is a recurring theme in Science Fiction, dating back at least to Heinlein’s “Jerry Was a Man”, which tells us that the topic is interesting and raises real moral problems. Unfortunately, the series falls into the typical YA pitfall – the teens are the only ones with the sense to do anything right and save the world, while the adults at best stand around. And it falls into that pitfall hard, as the first novel develops the story with this gem: a 16 year old intern bests all of the scientific research performed to date, and the only ones to act on her insight are a bunch of teenagers.
The author shows his creativity in describing how climate can be changed by genetically engineered microorganisms. Unfortunately, his best creative idea is not well integrated into the plot. The climate change is unbelievably quick and is used simply to pile additional obstacles onto the heroic humans toward the end of the series. This was just irritating to me.
Intended or not, the book makes a point about the importance of separation of powers. In the human community, the government becomes tyrannical, ruling by fear and going so far as to mandate serial pregnancy from age 16. This is possible because all the power is concentrated in the hands of a 20-member Senate (in reality, an even smaller 5-member core Senate), which makes laws, executes them and also judges citizens.
There is an interesting angle on the ethics of science. In this book series, well intentioned scientists attempted to subvert evil plans of the government, but ended up creating even bigger monstrosity in the process.
While the beginning is of questionable quality, so is the ending. The lighthearted happy ending effectively erases the sacrifice of the immediately preceding battle scene. How important could the self-sacrifice of Heron be, a character known as a cynic who put her self-preservation before everything else? I guess not very important, if even her comrades shrug it off and get busy setting up dating arrangements before her body is cold. Need I compare this to the powerful punch-in-the-gut conclusion of the Divergent series?
In summary, an interesting topic, some creative ideas, but poor execution.

Teleportation will change, not destroy transportation industry

Suppose all humans could exercise what we today consider supernatural abilities, such as teleportation (moving objects by the force of thought) or self-healing (correcting medical conditions by the force of one’s own will). What would be the economic impact of such universal super abilities?

Sci-Fi grandmaster Robert Heinlein explores this problem at least twice: first, briefly, in the novella Lost Legacy, part of the book Assignment in Eternity, then at length in Stranger in a Strange Land. Heinlein’s teleportation is without apparent limits – anyone could do it, with any object and at any distance. Heinlein believes the impact would be a total transformation of society. Granted, in both books these 2 abilities are only some of the superpowers he describes as part of a larger framework of human empowerment. Yet the implication is there – super abilities would do more than disrupt certain industries or even the economy as a whole. They would profoundly change the way people live and organize their society.

Alfred Bester in The Stars My Destination limits his query to teleportation (jaunteing), which is teachable to everyone. A person can only jaunte himself, and there are minor limits on this ability – the necessity to know the geographic location, elevation and situation of both the origin and the destination. Yet even this more limited ability is so disruptive to the economy and society that it results in a war.

I suggest that the impact would be more limited. In his book The Witling, Vernor Vinge describes a humanoid species who developed the ability to sense densities remotely (seng), attack other beings by force of thought (keng) and teleport objects (reng). Importantly, the degree of people’s ability varies widely, from extraordinary to nearly non-existent. Doesn’t this variation sound more reasonable? Vinge also posits very real physical limits to teleportation – quite understandable, given that he was in Computer Science before he was a professional writer. Vinge’s teleportation is subject to the law of conservation of linear momentum, which has profound implications. A planet’s rotation means that “stationary” objects at different points on the surface of the globe actually move at different speeds (highest on the equator, zero at the poles). The movement direction (velocity vector) also differs: objects on opposite sides of the globe move in opposite directions. Because of that, objects teleported a significant distance across the surface of the globe arrive at their destination traveling at high speed. Rocks can withstand being thrown at high speed, but anything valuable would be destroyed. Because of this, teleportation is practical only at short range. Take the difference in abilities and the limited range together, and you arrive at a very reasonable conclusion, explained by Vinge this way: “and really, most people depended on professional rengers for long-distance jumps, anyway”.
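The momentum problem is easy to quantify. Here is a back-of-the-envelope sketch using Earth’s numbers for concreteness (Vinge’s planet is, of course, fictional): the eastward surface speed due to rotation is v = ωR·cos(latitude), about 465 m/s at the equator.

```java
public class JaunteMomentum {
    static final double EARTH_RADIUS_M = 6_371_000.0;   // mean radius
    static final double SIDEREAL_DAY_S = 86_164.0;      // one full rotation

    // Eastward surface speed at a given latitude, in m/s.
    static double surfaceSpeed(double latitudeDeg) {
        double omega = 2 * Math.PI / SIDEREAL_DAY_S;    // angular speed, rad/s
        return omega * EARTH_RADIUS_M * Math.cos(Math.toRadians(latitudeDeg));
    }

    public static void main(String[] args) {
        System.out.printf("equator: %.0f m/s%n", surfaceSpeed(0));   // ~465 m/s
        System.out.printf("45 deg:  %.0f m/s%n", surfaceSpeed(45));  // ~328 m/s
        // Teleporting between points whose velocity vectors point opposite
        // ways means arriving with a relative speed approaching twice the
        // surface speed -- supersonic, and fatal to fragile cargo.
    }
}
```

Even a modest cross-hemisphere jump leaves the traveler moving at hundreds of meters per second relative to the ground, which is exactly why Vinge confines casual teleportation to short range.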

So there you go: even if teleportation were available to everyone, the transportation industry would still exist, because it wouldn’t be easy for everyone to teleport everything one needs accurately. No one wants to find himself in Joe’s garage explaining: “Oh hi Joe, I didn’t mean to visit you – I actually wanted to drop in on Bob to check if I by mistake sent my new chair to his house instead of mine”. All people are different, and some people will leverage their superior abilities into a profession. Compare it to driving – even though most people in the US own cars, there are still a lot of commercial drivers, especially filling niche needs: taxis, school buses, transit buses, trucking and so on. There will still be a need for professionals and a transportation industry.

Professional medical help would be in even higher demand. Suppose one could remove an appendix or fix a broken bone all by force of thought. Clearly these would be highly complex actions, requiring a thorough understanding of one’s own body. If you make a mistake, you could break additional bones or jumble your intestines beyond repair! We suppose all humans would be able to achieve it, yet I suspect most would stop their progression way short of that. A regular person would be able to deal with simple situations, maybe not much different from what we can do today with the home medicine cabinet. Doctors would still exist, even if their methods changed quite radically.

Even with these wildest imaginable human superpowers, the transportation and healthcare industries will still exist.