Gizmocrazed – Future Technology News: Artificial Intelligence, Medical Breakthroughs, Virtual Reality

This Snapchat screenplay is the most wanted in Hollywood right now (Tue, 18 Dec 2018 05:31:54 +0000)
From the very small screen to the very big one.

Image: LightRocket via Getty Images

A screenplay about the birth of Snapchat has just been crowned the most-liked script to make the rounds in Hollywood this year, according to an annual industry list. It’s called Frat Boy Genius, and is a fictionalized docu-drama about Evan Spiegel and the creation of Snapchat. 

Frat Boy Genius earned the top spot on something called the Black List. Every year, Hollywood executives vote for the scripts that crossed their desks in the past year that they liked the most. The scripts with the most votes then make the list, ranked in descending order.

These screenplays are all unproduced. But there's a long history of Black List scripts getting made and going on to receive awards and praise from the highest echelons of the industry. The list also serves as a peek at the buzziest subjects in Hollywood at the moment.

And, according to Vulture, right now that means tech, and the idiosyncratic, sometimes ill-suited-to-power figures at the helm. In addition to Frat Boy Genius, scripts about Matt Drudge, Gawker, and Cambridge Analytica also scored high on the list.

The admiration of Hollywood executives doesn’t mean that a Snapchat movie is in the works. It just means that the power brokers were into it. So we’re going to have to wait and see whether the Snapchat ghost will make it onto the big screen.

Vulture offered a closer look at some of the winning scripts, including Frat Boy Genius, in comic book form. Judging by the accompanying comic, the movie sounds delightfully bonkers. The strip, which covers just one scene, offers a glimpse into the world of private jets, Zuckerberg tête-à-têtes, and highly public romance that defined Snapchat's early days.

The founding of tech companies has provided fruitful subject matter for movies. Most successfully, of course, The Social Network has come to define the Facebook story; during Mark Zuckerberg's congressional hearing, a member of Congress even questioned the CEO about Facemash, the hotness-ranking app from Zuckerberg's college days made famous by the film.

However, tech stories are not necessarily a formula for success. The Steve Jobs biopic starring Ashton Kutcher was not, um, well received. And some people are definitely not stoked for not one but two forthcoming movies about Uber.

Snapchat is currently struggling to maintain the user base and buzz that it once generated, particularly during its 2017 IPO. But in the aftermath of The Social Network, the journey of a startup from dorm room project to the Wall Street trading room floor is certainly a fun story to tell.

WATCH: Ashton Kutcher On Set Reveals a Spitting Image of Steve Jobs


In emerging markets there are no copycats, just budding entrepreneurs (Tue, 18 Dec 2018 05:31:07 +0000)

Every year I teach an MBA course at Stanford about the exciting opportunities for tech investors and entrepreneurs in developing economies. When we designed the syllabus back in 2013, Rocket Internet was still firing on all cylinders on four continents. The unapologetic machine built to copy big American internet companies created billions of dollars for the Samwer brothers and its backers. During Rocket’s golden years, the best startups in the developing economies seemed to inevitably have an original reference in Silicon Valley.

Accordingly, we added a class about the opportunity of replicating business models to seize this information arbitrage. Call it the second-mover advantage.

Despite my conviction about the model, the word “copycat,” shorthand for replicated startups and attached to these ventures, annoyed me from the start. More than a term describing a straightforward recipe for launching, I see it as an unconscious way to belittle an entire group of hard-charging founders and investors.

Indeed, while in foreign eyes we have been building a Mexican Kickstarter, a Middle Eastern Uber, an Indian Amazon or a Colombian Postmates, I argue that visionary founders take a simple idea that already exists and create new worlds.

“On the internet, there are Einsteins and there are Bob the Builders. I’m Bob the Builder.” –Oliver Samwer, founder of Rocket Internet

Gateway to entrepreneurship

While impact is the final goal, founders can approach the journey in different ways. The most common approach in the startup world is to use the business method or, more pompously, the design-thinking methodology. “Fall in love with the problem, not the solution,” mentors keep telling a succession of startup cohorts in accelerator programs. The best and “leanest” way to product-market fit is to start small, then keep iterating on the solution until you nail it.

A second way to start is favored by engineers and scientists: Take a new promising technology or a forgotten molecule, then find a big problem. Keep iterating until you find a problem worth solving, like a hammer looking for a nail.

A third way is starting like painters create, building skills by copying classics, or like a new chef cooks by starting with iconic recipes: replicate a proven idea and iterate until you find traction.

Until a few years ago, replication was ostensibly the only way to scale in developing economies. The model helped raise local capital from risk-averse investors who needed reassurance. The playbook was already unfolding a couple of years ahead in the original market, and served as a guide to founders with no previous startup experience and no local role models. The potential acquirer was identified and sometimes contacted in advance. Founders weren't crazy and investors weren't dumb.

Replicating a business model has served in emerging ecosystems as the gateway to entrepreneurship and venture investing.

Photo courtesy of Flickr/A_Marga

Riding the next wave

According to conventional wisdom, new ecosystems around the world grow through the following three stages, be they in developing economies or more developed countries. First, local and foreign entrepreneurs replicate successful models focused on local markets. Then, as the ecosystem evolves, founders start applying existing technologies to solve local problems. Finally, as the tech space matures, new technologies begin to flourish.

In my opinion, those stages never happen sequentially the way ecosystem observers describe. Successful startups that began with a foreign inspiration can outgrow the master. When they are not bought into submission by the first mover, some of the most famous copycats have reinvented the original and made it better: Mercado Libre is far more relevant in e-commerce than eBay, and Flipkart is hardly an Amazon, not to mention WeChat. These companies are in turn some of the most prolific tech innovators on the globe. In truth, ecosystems evolve organically in unique ways that reflect their history, geopolitical environment, economic structure and cultural features.

Two ways to defend the status quo: “It’s been done before” and “It’s never been done before.” –Thibault @Kpaxs

In defense of talent

Lately, you rarely hear American observers use the word copycat to describe any American company. After all, Gilt replicated Vente-Privee, Lime copied Chinese dockless bike sharing, and there are many more examples. Yet all American startups are treated as innovators while the rest are cast as mere followers.

Recently, Chinese and Indian startups seem to be given the benefit of the doubt regarding their originality. Is it because these regions have become more innovative? Maybe. But it's also because these ecosystems have gained the respect of Silicon Valley. Indeed, China has decisively surpassed the U.S. as the most important country for consumer tech investment.

So here's my humble suggestion to our wealthier and more accomplished colleagues: stop using the c-word with founders. It's offensive. Most probably, these founders face more challenges building their companies, and lower odds of success, than the first mover. If anything, they have more merit than the originals.

As for founders: when they call you a me-too, remember that all teams started somewhere, somehow. In fact, most started like Bob the Builder before turning into Einsteins. The truth is, it doesn't matter where you start. You can start by applying a new technology or protocol. You can start with a problem you feel passionate about. You can start by replicating a business model. What matters is that you take a big swing at the future and trust you will figure out how to make it happen. It doesn't matter what label they use while you change the world for the better.

A Second X Chromosome Could Explain Why Women Live Longer Than Men (Tue, 18 Dec 2018 03:40:27 +0000)

Researchers say that women may be born with an advantage when it comes to longevity. (Credit: Pressmaster/

Women have an average life expectancy that's about four years longer than men's, regardless of culture or geography. And in many animal species, females outlive males.

Why females have an advantage in the longevity department hasn't been well understood. In the past, some assumed it had to do with lifestyle. But scientists say there may be a genetic mechanism underlying this age-old phenomenon. In a new study, researchers found that mice with two X chromosomes lived longer, regardless of other biological factors. The finding suggests the second X chromosome may govern longevity and explain why women outlive men.

X Marks Longer Lifespans

All mammals are born with two sex chromosomes. Females have two X chromosomes, whereas males have one X and one Y. X chromosomes are necessary for survival and contain important genes related to the brain. Y chromosomes, on the other hand, are found only in males and are not crucial for survival. Y chromosomes carry relatively few genes beyond those related to secondary sex characteristics such as male genitals and facial hair.

To investigate the link between chromosomes and survival, researchers tested different chromosome and gonad combinations among genetically identical mice. Some mice had biological male or female combinations mirroring those found in nature — XX with ovaries and XY with testes. Other mice had XX chromosomes paired with testes and XY chromosomes paired with ovaries.

Researchers found that mice with natural female mouse biology, two X chromosomes and ovaries, outlived all the others. But mice with two X chromosomes tended to live longer regardless of whether they had ovaries or testes. In this group of mice, the longevity effect appeared beginning at 21 months, near the end of a normal mouse lifespan. Researchers say the results point to a potential role of the second X chromosome in longer lifespans.

“This suggests that the hormones produced by female gonads increase lifespan in mice with two X chromosomes, either by influencing how the mouse develops or by activating certain biological pathways during their lives,” said Dena Dubal, a neurologist and senior author of the study published in Aging Cell.

Scientists don't understand exactly why the second X chromosome contributes to a longer lifespan. It may be that the second X and its genetic expression have a protective effect that increases survival. Another theory is that the presence of a Y chromosome is somehow harmful. The scientists hope to untangle this interplay in future chromosomal studies.

“When things go wrong in aging, having more of the X chromosome, along with its diversity of expression, could be really beneficial,” Dubal said.

How Dense Does a Body Have to Be to Break a Concrete Floor? (Tue, 18 Dec 2018 02:53:19 +0000)

I often miss some cool stuff the first time I watch a movie. This is probably a good thing—it shows that I’m focused on the story and not the small details. In this case, the movie is 2016’s Captain America: Civil War and the scene involves the density of a character named Vision.

OK, I am going to give a SPOILER ALERT—but if you haven’t seen this movie yet, I have a feeling you won’t be upset about spoilers. Anyway, this scene doesn’t reveal any huge plot points.

So here’s the deal. Vision is trying to keep Wanda (Scarlet Witch) safe in the Avengers’ headquarters. Hawkeye comes to help her leave, but Vision catches them. Although Vision could easily defeat Hawkeye, the same cannot be said for the powers of Scarlet Witch. Scarlet Witch has some ability to control matter—and in this case it appears that she can activate Vision’s powers. One of Vision’s primary powers is his ability to change his density.

So with a bit of magic, Scarlet Witch increases Vision's density up to the point where he becomes too massive to move. He grows so massive that he breaks through the floor. With Vision out of the way, Wanda and Hawkeye are free to leave and finish the rest of the movie.

Density and Mass of Vision

Now for the fun part. What was the density and mass of Vision when he crashed through the floor? How about a quick review of density? Take a look at these five objects.

Rhett Allain

These blocks are all different, but there is something similar about them. The three blocks on the left all have the same mass (about 45 grams). The three blocks on the right all have the same volume (I'm disappointed that they are almost exactly 1 cubic inch; they should have some nicer value in cm³). But wait! What if you take the mass of each block and divide by its volume? This is how we define density. Density is a property that doesn't depend on the size of the object, just its material. So the two white objects (on the ends) have different volumes and different masses, but the same density. The same is true for the two black objects.
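Since density is just mass divided by volume, a couple of lines of Python make the point that it doesn't change with size (the numbers here are illustrative, not measurements of the actual blocks):

```python
# Density = mass / volume is a property of the material, not the object's size.
# Illustrative numbers only -- not measurements of the blocks in the photo.

def density(mass_g, volume_cm3):
    return mass_g / volume_cm3

small_block = density(45.0, 16.4)    # roughly 1 cubic inch (16.4 cm^3)
large_block = density(450.0, 164.0)  # ten times the mass AND ten times the volume

print(small_block, large_block)  # same value: same material, same density
```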

To estimate the mass and density of Vision, I need some particular event that hints at his mass, since you can't "see" the mass of an object. Yes, you guessed it: I can use the moment Vision breaks through the floor to estimate his mass.

Here is what I’m going to do. I’m going to assume the floor is made of concrete and that the gravitational force on Vision (due to his large mass) is enough to exceed the compressive strength of concrete to initiate the break.

What is “compressive strength”? This is the pressure a material can withstand before breaking. Yes, it’s the pressure and not the force (remember that pressure is the force divided by the contact area). This is why you can more easily break a material with a sharp pointy object than you can with a big flat object. The pointy object has a smaller area and therefore you get a bigger pressure for the same amount of force.
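A quick sanity check on the force-versus-pressure distinction, with made-up numbers for the contact areas:

```python
# The same force produces wildly different pressures depending on contact area.
force = 100.0               # newtons (illustrative)
nail_tip_area = 1e-6        # m^2, about 1 mm^2 -- the "sharp pointy object"
flat_block_area = 1e-2      # m^2, a 10 cm x 10 cm face -- the "big flat object"

nail_pressure = force / nail_tip_area     # ~1e8 Pa: well above concrete's strength
block_pressure = force / flat_block_area  # ~1e4 Pa: harmless

print(nail_pressure, block_pressure)
```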

But what about the compressive strength of concrete? It's somewhere between 20 and 40 megapascals (MPa), where a pascal is one newton per square meter. This means that if the floor breaks, I know the pressure from the force between Vision and the floor. If I estimate his contact area, I can then calculate the force and, from that, his mass.

Really, the only thing left to estimate is the contact area. I could perhaps do a more detailed analysis, but I think it's fine to just get a rough value. What about a contact area that is a rectangle with a length of 1 meter and a width of 0.5 meters? That would put the area at 0.5 m². I'm going with that.

Oh, one more thing. If I want to calculate the density of Vision, I also need his volume. He looks like a normal human—at least in terms of size. Humans have a density close to 1,000 kg/m³ (the density of water). If a human has a mass of 75 kg, the volume would be around 0.075 m³. I'm going with that value.

Let's crunch the numbers; you can always put your own values in if you don't like mine.
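Here's a minimal Python sketch of the calculation. The 30 MPa figure is my assumed midpoint of the 20–40 MPa range for concrete; the area and volume are the estimates above:

```python
# Sketch of the floor-breaking estimate. All inputs are the estimates from the
# text; 30 MPa is an assumed midpoint of the 20-40 MPa range for concrete.
compressive_strength = 30e6  # Pa
contact_area = 0.5           # m^2
g = 9.8                      # N/kg, gravitational field strength
human_volume = 0.075         # m^3, a 75 kg human at ~1000 kg/m^3

# The floor gives way when Vision's weight over the contact area reaches the
# compressive strength: P = F/A, with F = m*g.
breaking_force = compressive_strength * contact_area  # N
mass = breaking_force / g                             # kg
density_vision = mass / human_volume                  # kg/m^3

print(f"mass ~ {mass:.2e} kg")        # about 1.5 million kilograms
print(f"density ~ {density_vision:.2e} kg/m^3")
```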

Just to be clear, that is massive. The density is extreme (though it's not neutron-star-level density). Actually, it's sort of difficult to visualize a mass that large. How about this: what would be the diameter of a spherical asteroid of that same mass? If the asteroid is made of normal stuff, it might have a density of 3,000 kg/m³. With the same mass as Vision, a spherical asteroid would have a diameter of around 10 meters (30 feet). That's one big old rock.
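Working backward from that same estimated mass, the asteroid comparison is one formula, the volume of a sphere:

```python
import math

# Diameter of a rock asteroid with Vision's estimated mass (~1.53e6 kg from the
# 30 MPa floor estimate; adjust if you assumed a different concrete strength).
mass = 1.53e6          # kg
rock_density = 3000.0  # kg/m^3, "normal stuff"

volume = mass / rock_density                      # m^3
radius = (3 * volume / (4 * math.pi)) ** (1 / 3)  # from V = (4/3)*pi*r^3
print(f"diameter ~ {2 * radius:.1f} m")           # roughly 10 meters
```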


You know (or you should know) that I can’t just stop there. There are many questions left unanswered. I would normally just assign these as homework, but let me answer two of these questions for you.

Would there be a noticeable gravitational force between Vision and Hawkeye due to the large mass?

There is a gravitational interaction between all objects with mass. Normally, on the surface of the Earth, we only deal with the gravitational force between an object and the Earth itself. Interactions between two ordinary objects (like two people) are usually so small that you would never be able to measure them. In this case, however, one of those people has a giant mass.

The magnitude of the gravitational force depends on both the masses of the objects and the distance between them. If you assume the objects are point masses (not true, but an OK approximation), then the force is F = G·m₁·m₂/r².

The G is just the universal gravitational constant, with a value of 6.67 × 10⁻¹¹ N·m²/kg². If I assume a distance of 1.5 meters between Hawkeye and Vision, the gravitational force between them would be 0.0034 newtons. That is a pretty tiny force. In fact, if you put a paperclip on top of Hawkeye's head, the weight of this paperclip would be more than twice the gravitational pull from Vision. I don't think Hawkeye would notice it.
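Plugging the numbers into Newton's law of gravitation (Hawkeye's mass and the 1.5 m separation are my assumptions):

```python
# F = G * m1 * m2 / r^2, treating both characters as point masses.
G = 6.67e-11       # N*m^2/kg^2, universal gravitational constant
m_vision = 1.53e6  # kg, the floor-breaking estimate (30 MPa assumption)
m_hawkeye = 75.0   # kg, assumed
r = 1.5            # m, assumed separation

F = G * m_vision * m_hawkeye / r**2
print(f"F ~ {F:.4f} N")  # about 0.0034 newtons
```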

Assuming Scarlet Witch increases Vision’s density at a constant rate, how long will it take for him to have a mass equivalent to the Earth?

If you watch a clip of the scene, it seems clear that Scarlet Witch starts influencing Vision’s mass when his head gem turns from yellow to red. Vision drops to his knees 13.9 seconds later. The floor also starts to crack at this point. Finally, after 20.4 seconds, Vision crashes through the floor.

Assuming a constant rate for the increase of mass (and thus density), the mass increases at about 100,000 kilograms per second. If this rate stays constant, it would take about 6 × 10¹⁹ seconds to get up to the mass of the Earth (6 × 10²⁴ kg). Hint: that time is super, super, super long. It's not going to happen. But it was still fun to calculate.
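Recomputing that timescale from the rate above:

```python
# Time to reach Earth's mass at a constant mass-increase rate.
rate = 100_000     # kg/s, estimated from the 20.4-second scene
earth_mass = 6e24  # kg

t_seconds = earth_mass / rate
t_years = t_seconds / (365.25 * 24 * 3600)
print(f"{t_seconds:.0e} s ~ {t_years:.0e} years")  # trillions of years
```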

Here are a few more homework questions for you:

  • How long (assuming a constant mass increase rate) until Vision’s mass reaches the point where Hawkeye gets pulled to Vision?
  • If you consider the relationship between mass and energy (E = mc²), how much energy would it take to increase Vision’s mass? What about the power? How does this compare to the power output of the Sun?
  • How large would Vision’s mass need to get before he became a black hole?

More Great WIRED Stories

This fake package covers porch thieves in glitter and fart spray (Mon, 17 Dec 2018 23:34:48 +0000)

Having a package stolen off your front porch sucks. No matter what’s inside the box, it just feels… violating. Someone came into your space and took your stuff just because they could probably get away with it. And even if you go to the cops with license plates and high-res face photos, they’ll often respond with a big, apathetic shrug (particularly around Christmas when package thefts skyrocket).

After having one of his own packages nabbed, engineer/YouTuber Mark Rober decided to take things into his own hands. He built a box that… well, it’ll make any would-be thieves think twice before hitting his house again. And probably make them have to go buy a really good vacuum.

Here’s the video:

In what might be the most wonderfully over-engineered act of lighthearted retaliation to ever exist, this thing is just layer upon layer of ingenuity.

It starts with a GPS tracker that lets Mark know when the box has been moved.

As soon as it’s opened, a custom-built spinning tub flings ridiculously fine glitter in every direction, covering whoever opened it from head to toe (or, in many of the filmed cases, from car door to car door). Look for the slo-mo glittersplosion at around the four-minute mark — that alone is a work of art.

A few seconds later comes a blast of canned fart spray. Or, I should say, the first blast of canned fart spray… because it keeps coming (partly in hopes that the thief throws out the box, allowing Mark to use the GPS tracker to recover it).

Oh, and the whole thing is being filmed (and uploaded online!) from basically every angle, thanks to a very carefully aligned rig of four hidden cameras.

And there's more! I don't want to spoil it, but everything down to the tiny details of the box itself was planned out to make thieves feel a little bit more silly after the glitter settles.

Now, this probably isn’t something you should try at home. Building packages that use hidden switches and circuit boards to do unexpected things when you open them seems like something that can land you on a list. But holy wow, watching it is therapeutic.

Want to go deeper? His co-builder on this project, Sean Hodgins, has a teardown video about the engineering involved.

Penguin Poop, Seen From Space, Tells Our Climate Story (Mon, 17 Dec 2018 14:53:16 +0000)

Satellites watch many things as they orbit the Earth: hurricanes brewing in the Caribbean, tropical forests burning in the Amazon, even North Korean soldiers building missile launchers. But some researchers have found a new way to use satellites to figure out what penguins eat by capturing images of the animal’s poop deposits across Antarctica.

A group of scientists studying Adélie penguins and climate change have found that the color of penguin droppings indicates whether the animals ate shrimp-like krill (reddish orange) or silverfish (blue). The distinction is interesting because the penguin’s diet serves as an indicator of the response of the marine ecosystem to climate change. Separate research is starting to show, for example, that penguin chicks that are forced to rely on krill as their main source of food don’t grow as much as those who have fish in their diet.

The penguins' guano deposits build up over time on the rocky outcroppings where the birds congregate, making them colorful landmarks. The researchers took samples from the penguin colonies, measured their spectral signatures, then matched those colors to images taken by the orbiting Landsat-7 satellite.
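The matching step is conceptually simple: find the reference color closest to each satellite pixel. A toy sketch of the idea (the RGB values are invented for illustration, not the study's measured spectra):

```python
# Classify a pixel's diet signal by its nearest reference color.
def closest_diet(pixel, references):
    """Return the diet label whose reference color is nearest (squared Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(references, key=lambda label: dist(pixel, references[label]))

# Invented reference colors standing in for field-measured guano spectra:
references = {
    "krill (reddish orange)": (200, 90, 40),
    "fish (blue)": (60, 90, 180),
}

print(closest_diet((190, 100, 50), references))  # matches the krill reference
```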

“There’s a clear regional difference, krill on the west, fish on the east,” says Casey Youngflesh, a postdoctoral researcher at the University of Connecticut who presented his findings last week at the annual meeting of the American Geophysical Union in Washington. It’s the first time that scientists have been able to track diet from space, and researchers say it’s a new tool for looking at how certain seabird and penguin populations are doing in other regions of the planet.

Knowing what, and how much, five million breeding pairs of Adélie penguins are eating is important because it tells researchers how the base of the food chain is doing. The population of tiny krill has crashed on the western side of the Antarctic Peninsula, the 800-mile thumb that sticks up toward the tip of South America. Rapidly warming, changing climatic conditions, as well as a huge increase in industrial-scale fishing, have taken a toll on these small crustaceans.

Krill are harvested commercially for use in pet food and nutritional supplements, but for many penguins, it’s the basis of their diet. As krill have become more scarce, so, too, have the penguins in western Antarctica who like to eat them. “Diet can tell us how food webs are shifting over time,” says Youngflesh. “It would take a lot of time and a lot of money to visit all these sites. Climate change is extremely complicated and we need data on large scales.”

Youngflesh says he hopes the color-coded poop maps can be used to track penguin populations in the future, as well as other seabirds across the globe. That’s because seabirds aggregate in the same places as penguins and eat the same things. Of course, this form of remote sensing can’t tell researchers how penguins’ diets compare across time. So one researcher dug through the guano itself in search of insights into the penguins’ history.

“There are unanswered questions about when did they arrive, how have their diets changed over time,” says Michael Polito, assistant professor of oceanography and coastal sciences at Louisiana State University. “Those are questions satellites can’t answer, and it was my job to dig it up.”

Michael Polito/Louisiana State University

Polito excavated mounds of guano, feathers, bones and eggshells on the remote Danger Islands, a large penguin colony at the tip of the Antarctic Peninsula that has remained mostly free of human visitors. When he reached the bottom of the pile, he took the material back to his lab and applied radiocarbon techniques to figure out the age of the first penguin settlers. He found that penguins have been living on the Danger Islands for nearly 3,000 years. Since Adélie penguins need access to ice-free land, open water and a plentiful food supply to feed their chicks, the presence or absence of a penguin colony is a sign of the climate conditions at the time, Polito says. His new study pushes back the date of the penguins' arrival in that region by 2,200 years and confirms other data taken from ice cores and sediments about the history of the region's climate.
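The radiocarbon arithmetic behind an age like that is a single logarithm (the remaining-carbon fraction below is invented for illustration; it is not Polito's measurement):

```python
import math

# Age from the fraction of carbon-14 remaining: age = -mean_life * ln(fraction),
# where mean_life = half_life / ln(2).
HALF_LIFE = 5730.0  # years, carbon-14 half-life
MEAN_LIFE = HALF_LIFE / math.log(2)

def radiocarbon_age(fraction_remaining):
    return -MEAN_LIFE * math.log(fraction_remaining)

# A sample retaining about 69.6% of its carbon-14 dates to roughly 3,000 years:
print(round(radiocarbon_age(0.696)))
```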

“This ability to estimate penguin diets from space will be a real game changer for science in Antarctica,” Polito said. “It really takes a lot of time and effort to figure out what penguins eat using traditional methods so being able to evaluate diets all around the Antarctic continent from space is a pretty amazing leap forward.”

The combination of digging through poop and analyzing images from satellites is giving researchers a better handle on possible trouble spots for the Adélie penguin, as well as its cousins the chinstrap, Gentoo and emperor penguins. The laboratory of Heather Lynch, associate professor of ecology and evolution at Stony Brook University, put together a nifty continent-wide map of penguin colonies from the four species, and is using citizen volunteers to count them one by one. Lynch’s group is also beginning to look back at previous satellite images taken from the 1980s until now to see if they can establish the same penguin poop-diet connection.

More Great WIRED Stories

Delivery robot catches fire at university campus, students set up vigil (Mon, 17 Dec 2018 05:31:48 +0000)
A KiwiBot burst into flames on UC Berkeley’s campus.

Image: kiwibot

A KiwiBot, an automated food-delivery robot that operates on UC Berkeley's campus, caught fire on Friday afternoon.

In a post, the company explained the incident was due to a faulty battery which had been mistakenly installed instead of a functioning one. 

The errant battery started smoldering while the robot was idling, leading to smoke, then fire outside the Martin Luther King Jr. Student Union.

“A member of the community acted swiftly to extinguish the flames using a nearby fire extinguisher. Within moments of the incident occurring, it had already been contained,” the post read.

“The Berkeley Fire Department arrived shortly thereafter to secure the scene, and doused the robot with foam ensuring there was no risk of re-ignition.”

Some people caught the incident on camera, where a crowd gathered around the blaze.

After the incident, students set up a candlelight vigil for the fallen robot. On Facebook, the robot was called a “hero” and a “legend,” according to The Daily Californian.

KiwiBot added that following the incident it pulled robots from service, and that orders in progress were delivered by hand. 

The company added no customers or members of the public were at risk, and that it has installed custom software to monitor the battery’s state.

Since 2017, KiwiBot has been used to deliver food around the UC Berkeley campus, where there are more than 100 robots operating as part of the fleet.


They scaled YouTube — now they’ll shard everyone with PlanetScale (Mon, 17 Dec 2018 05:31:02 +0000)

When the former CTOs of YouTube, Facebook and Dropbox seed fund a database startup, you know there’s something special going on under the hood. Jiten Vaidya and Sugu Sougoumarane saved YouTube from a scalability nightmare by inventing and open-sourcing Vitess, a brilliant relational data storage system. But in the decade since working there, the pair have been inundated with requests from tech companies desperate for help building the operational scaffolding needed to actually integrate Vitess.

So today the pair are revealing their new startup PlanetScale that makes it easy to build multi-cloud databases that handle enormous amounts of information without locking customers into Amazon, Google or Microsoft’s infrastructure. Battle-tested at YouTube, the technology could allow startups to fret less about their backend and focus more on their unique value proposition. “Now they don’t have to reinvent the wheel” Vaidya tells me. “A lot of companies facing this scaling problem end up solving it badly in-house and now there’s a way to solve that problem by using us to help.”

PlanetScale quietly raised a $3 million seed round in April, led by SignalFire and joined by a who’s who of engineering luminaries. They include YouTube co-founder and CTO Steve Chen, Quora CEO and former Facebook CTO Adam D’Angelo, former Dropbox CTO Aditya Agarwal, PayPal and Affirm co-founder Max Levchin, MuleSoft co-founder and CTO Ross Mason, Google director of engineering Parisa Tabriz and Facebook’s first female engineer and South Park Commons founder Ruchi Sanghvi. If anyone could foresee the need for Vitess implementation services, it’s these leaders, who’ve dealt with scaling headaches at tech’s top companies.

But how can a scrappy startup challenge the tech juggernauts for cloud supremacy? First, by actually working with them. The PlanetScale beta launching now lets companies spin up Vitess clusters on its database-as-a-service, on their own infrastructure through a licensing deal, or on AWS, with Google Cloud and Microsoft Azure integrations coming shortly. Once those integrations with the tech giants are established, PlanetScale clients can use it as an interface for a multi-cloud setup, where they might keep their master data copies on AWS US-West with replicas on Google Cloud in Ireland and elsewhere. That protects companies from becoming dependent on one provider and then getting stuck with price hikes or service problems.

PlanetScale also promises to uphold the principles that undergirded Vitess. “It’s our value that we will keep everything in the query path completely open source so none of our customers ever have to worry about lock-in,” Vaidya says.

PlanetScale co-founders (from left): Jiten Vaidya and Sugu Sougoumarane

Battle-tested, YouTube-approved

He and Sougoumarane met 25 years ago at the Indian Institute of Technology Bombay. Back in 1993 they worked together at pioneering database company Informix before it flamed out. Sougoumarane was eventually hired by Elon Musk as an early engineer at his startup before it was acquired by PayPal, and then left for YouTube. Vaidya was working at Google, and the pair were reunited when it bought YouTube and Sougoumarane pulled him onto the team.

“YouTube was growing really quickly and the relational database they were using, MySQL, was sort of falling apart at the seams,” Vaidya recalls. Adding more CPU and memory to the database infrastructure wasn’t cutting it, so the team created Vitess. The sharding middleware scales MySQL horizontally, letting users segment their database to reduce the load on any one server while still running operations quickly. YouTube has smoothly ridden that infrastructure to 1.8 billion users ever since.
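The core idea behind sharding is simple to sketch: route each row to one of several MySQL shards by hashing its sharding key, so a single user’s data stays together while overall load spreads out. A minimal illustration in Python — the shard count and key scheme here are invented for the example, not Vitess’s actual API:

```python
import hashlib

NUM_SHARDS = 4  # illustrative; a real deployment sizes and re-shards dynamically

def shard_for(user_id: str) -> int:
    """Map a sharding key to one of NUM_SHARDS MySQL shards."""
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS

# Rows with the same key always land on the same shard, so a single user's
# queries hit one database while the total load spreads across all shards.
users = ["alice", "bob", "carol", "dave"]
placement = {u: shard_for(u) for u in users}
```

Vitess layers query routing, replication and live re-sharding on top of this basic key-to-shard mapping, so application code can keep talking to what looks like one database.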

“Sugu and Mike Solomon invented Vitess and open-sourced it right from the beginning in 2010, because they knew the scaling problem wasn’t just YouTube’s: they’d be at other companies five or 10 years later trying to solve the same problem,” Vaidya explains. That proved true, and now top apps like Square and HubSpot run entirely on Vitess, with Slack now 30 percent on board.

Vaidya left YouTube in 2012 and became the lead engineer at Endorse, which got acquired by Dropbox, where he worked for four years. But in the meantime, the engineering community strayed toward MongoDB-style non-relational databases, which Vaidya considers inferior. He sees indexing issues, and says that if the system hiccups during an operation, data can become inconsistent, a big problem for banking and commerce apps. “We think horizontally scaled relational databases are more elegant and are something enterprises really need.”

Database legends reunite

Fed up with the engineering heresy, a year ago Vaidya committed to creating PlanetScale. It’s composed of four core offerings: professional training in Vitess, on-demand support for open-source Vitess users, Vitess database-as-a-service on PlanetScale’s servers and software licensing for clients that want to run Vitess on premises or through other cloud providers. It lets companies re-shard their databases on the fly to relocate user data to comply with regulations like GDPR, safely migrate from other systems without major codebase changes, make on-demand changes and run on Kubernetes.

The PlanetScale team

PlanetScale’s customers now include Indonesian e-commerce giant Bukalapak, and it’s helping GitHub and New Relic migrate to open-source Vitess. Growth is suddenly ramping up due to inbound inquiries. Last month, around the time Square Cash became the No. 1 app, its engineering team published a blog post extolling the virtues of Vitess. Now everyone’s seeking help with Vitess sharding, and PlanetScale is waiting with open arms. “Jiten and Sugu are legends and know firsthand what companies require to be successful in this booming data landscape,” says Ilya Kirnos, founding partner and CTO of SignalFire.

The big cloud providers are trying to adapt to the relational database trend, with Google’s Cloud Spanner and Cloud SQL, and Amazon’s RDS and Aurora. Their huge networks and marketing war chests could pose a threat. But Vaidya insists that while it might be easy to get data into these systems, it can be a pain to get it out. PlanetScale is designed to give customers freedom of optionality through its multi-cloud functionality, so their eggs aren’t all in one basket.

Finding product market fit is tough enough. Trying to suddenly scale a popular app while also dealing with all the other challenges of growing a company can drive founders crazy. But if it’s good enough for YouTube, startups can trust PlanetScale to make databases one less thing they have to worry about.

Crispr Scandal: How Do You Publish a Scientific Villain's Data? Mon, 17 Dec 2018 02:53:15 +0000

How do you handle the data of a scientist who violates all the norms of his field? Who breaches the trust of a community that spans the entire globe? Who shows a casual disregard for the fate of the whole human species?

On the one hand, you might want to learn from such a person’s work; to have a full and open dissection of everything that went wrong. Because, spoiler, there was a lot that went wrong in the case in question. But rewarding such “abhorrent” behavior, as one scientist put it, with a publication—the currency of the scientific world—would send a message that ethical rules only exist to be broken.

This is the precarious situation in which we find ourselves today, as scientists hash out the next chapter of the human gene-editing scandal that erupted two weeks ago, when the Chinese scientist He Jiankui revealed that for the last two years he has been working in secret to produce the world’s first Crispr-edited babies. Scientists denounced the work with near-unanimous condemnation, citing its technical failures as well as its deep breaches of ethical (and possibly legal) lines. What’s much less certain is what should happen to the work, now that it’s been done.

Hours after He presented data on the twin girls at an international genome editing summit in Hong Kong, copies of his slides were already circulating in email inboxes and on Twitter. Scientists scrutinized the work, 280 characters at a time, and pointed out all the questions that remained unanswered. It was the kind of conversation that normally would take place under the auspices of a journal. But He, who made his announcement over YouTube, has so far produced no manuscript for public consumption. A paper describing this work is reportedly under peer review, and a second one about additional Crispr experiments in human embryos was rejected by an international journal over ethical and scientific concerns, STAT reported Monday morning.

Scientists are beginning to grapple with the very real possibility that He’s work may never be awarded publication status, along with its attendant sheen of legitimacy. And that may be the academic justice he deserves. But it also highlights an intractable tension embedded in scientific publishing: policing bad actors comes at the cost of scientific censorship.

“It’s a very dicey issue,” says Michael Eisen, a molecular biologist at University of California, Berkeley, and a staunch advocate of open-access publishing. “There need to be consequences for people who do things that are deemed to be unethical. You don’t want to have a system that gives people reasons to just randomly experiment on people.”

The scientific publishing system, imperfect as it may be, has remained relevant in an era where anyone can buy a URL, self-publish a paper, and push it out to social media platforms reaching millions of people all in the span of an afternoon. The reason is that data wants to be seen in context, in conversation with other data. Through the connective tissue of citations, scientific journals establish a common set of vetted facts to debate, challenge, and be inspired by. They ensure some modicum of permanence to those facts; so that people today, tomorrow, and 100 years into the future can all point to the same digital object identifier assigned at publication and know that they’re all talking about the same thing.

What then are the scientific costs to building a foundation for the field of human germline editing with one very consequential brick conspicuously missing? Disappearing the data down a memory hole presents logistical challenges as well as philosophical ones. Does the original sin of He Who Must Not Be Named preclude society from studying these twin babies as they grow up and maybe have children of their own? Addressing these questions will require decoupling the knowledge-building purpose of scientific publishing from the career-building one.

Now, lest you think these are just #ivorytowerproblems, let’s be real for a second. There are going to be more Crispr babies. Maybe not next year or the year after that. But they’re coming, and not just in China. Last week, Harvard researchers announced that they plan to edit the DNA of human sperm to see if it’s possible to create IVF babies with lower risks of developing Alzheimer’s later in life. All around the world, researchers are doing studies in mice and monkeys, filing patents, and starting companies, all with an eye toward a future where germline editing becomes a legal, socially acceptable technology. How the scientific community responds in the present moment will have huge consequences for how, and how fast, that happens.

“You would hate for some future experiment to fail or have some problem that could be avoided had people studied what happened here,” says Eisen. “In some sense there might even be an ethical duty for people to consider what was done.” Despite the uproar among scientists, they have not backed a moratorium, and embryo editing is ongoing.

During the Hong Kong summit, an audience member asked He if he would be willing to post his work to a public forum, such as the biology preprint server bioRxiv, so the scientific community could have access to the data. He said that the journal considering his manuscript had advised against posting anything to bioRxiv until the paper had passed peer review. He did not specify which journal. Nor did He return WIRED’s requests for comment. But scientists who have seen the manuscript doubt it will pass peer review any time soon, if ever.

“It was a very shoddy paper, very incomplete. What I saw wouldn’t pass any journal,” says Eric Topol, a cardiologist and director of the Scripps Research Translational Institute who reviewed He’s manuscript for the Associated Press. Other scientists have also denounced the experiment as a technical failure, based on the slides He presented in Hong Kong.

The edit He was trying to mimic was a 32-base pair deletion to the CCR5 gene that occurs naturally in some people with Northern European ancestry. Having two copies of that specific mutation leads to zero production of the CCR5 receptor, which HIV uses to access human immune cells. Instead, He introduced two new, unstudied mutations in one twin, Nana. In the other, Lulu, Crispr only managed to edit one copy of the CCR5 gene, again with a novel alteration. That means her healthy copy will still make CCR5 and she will likely still be susceptible to HIV. No one knows if the random mutations will provide a protective effect. They might even be harmful. Not only that, but early data suggests that both girls have a patchwork of edited and non-edited cells; a phenomenon known as mosaicism.

The work’s moral failings are equally numerous. Besides choosing to cripple a normal gene to reduce the risk of a preventable, controllable disease neither child had, He personally took study participants through the informed consent process, in which he had no training, and during which he falsely described his work as an “AIDS-vaccine development project.” The consent documents made no mention of the risks involved in disabling the CCR5 gene—including the potential for increased susceptibility to other viruses like West Nile and influenza. And the hospital where He claimed to have ethical approval denied knowledge of any such project and said in a statement that the signatures on the approval form are suspected to be forgeries.

The dilemma now, Topol says, is whether any publication or preprint server should be party to something so deeply sunk in a moral morass. “This hasn’t come up before because nothing has breached the ethics of human research like this,” says Topol. “It’s highly problematic to publish it anywhere.”

That includes bioRxiv, which was launched in 2013 by scientists at Cold Spring Harbor Laboratory to make scientific information available faster. Submissions to bioRxiv go through a quick (24-48 hour) screening process that filters out obviously non-scientific material, plagiarism, and any thinly veiled submissions by activists or AI. Scientists wanting to upload human studies have to list registered clinical trial IDs, meaning the studies have passed some form of ethical review.

He’s Crispr baby work was technically listed with China’s clinical trial registry, but it does not appear he sought prior approval from federal regulators. According to the AP, the study was listed on November 8, 2018, long after it began. Richard Sever, a molecular biologist and bioRxiv co-founder, declined to comment on He’s work specifically, but he did say that the preprint server would exercise its right to turn away any papers with known ethical or legal violations. “Our intention is not to provide a platform that seems to endorse or encourage unethical work,” says Sever. “That would be a very dangerous precedent for bioRxiv.”

All this hand-wringing over the moral complicity of publishing platforms raises a tree-falling-in-the-forest line of existential questioning: If no one will publish what He did, does that mean it’s not science?

Depends on what you mean by that.

Science with a small “s” is a human enterprise as old as humanity itself. Nibbling on that tasty-looking mushroom and waiting a few hours to see if you get sick? That’s hypothesis testing. Try it a few more times with successively bigger bites, maybe add a bit of open-fire cooking; you’ve got a scientific method going. He’s human experiment is clearly science in this sense.

Whether it will become Science with a big “S” remains to be seen. This more rigorous meaning of Science—which seeks to accrue knowledge by progressively, and systematically, reducing uncertainty—has only been around a few hundred years. Its arrival was marked by the development of the scientific paper, published in the pages of peer-reviewed journals. Before the 1600s, scientists communicated over private correspondence or in lectures. The scientific paper then became, and still is, the enabling unit of Science as a progressive, global enterprise.

So what then, is to be done with the work of researchers like He, who step outside the bounds of acceptable Science? It’s a question that has mostly only come up in a backward-looking way, to studies that might have met the ethical standards of the day but have since been roundly denounced. The Tuskegee study—which denied African-American men syphilis treatment—comes to mind, as does Operation Sea-Spray, the US Navy’s fatal release of pathogenic bacteria over San Francisco.

Then you have the case of Edward Jenner, who in the 1790s began experimenting on people with cowpox, injecting them with material taken from diseased dairy cows to see if it would protect them against smallpox. The Royal Society rejected his paper on the topic. Feeling it was an important public health contribution, Jenner published his case studies privately. The account led to the formation of mass vaccination campaigns and the eventual eradication of smallpox from the face of the Earth.

He’s few public statements have hinted at his ambitions to be a modern-day Jenner, ambitions that may have blinded him to his transgressions. Now the scientific establishment will have to decide if it too will wear blinders. Never before has the academic publishing world had to contend in real time with research that nearly everyone agrees was profoundly wrong. And if anything, the last two weeks have made it all too clear just how unprepared anyone is to do that.


Confirmed! Those LIGO Gravitational Wave Signals Were Real Sun, 16 Dec 2018 14:53:19 +0000

After the historic announcement in February 2016 hailing the discovery of gravitational waves, it didn’t take long for skeptics to emerge.

The detection of these feeble undulations in the fabric of space and time by the Laser Interferometer Gravitational-Wave Observatory (LIGO) was said to have opened a new ear on the cosmos. But the following year, a group of physicists at the Niels Bohr Institute in Copenhagen published a paper casting doubt on LIGO’s analysis. They focused their criticism on the experiment’s famous first signal, a squiggly line—representing the collision of giant black holes more than a billion light-years away—that was printed in newspapers worldwide and tattooed on bodies.

Even as LIGO sensed more gravitational-wave signals and its founders received Nobel Prizes, the Copenhagen researchers, led by professor emeritus Andrew Jackson, claimed to have found unexplained correlations in the “noise” picked up by LIGO’s twin detectors. The detectors — L-shaped instruments whose arms alternately stretch and squeeze when a gravitational wave passes — are located far apart in Livingston, Louisiana, and Hanford, Washington, to ensure that only gravitational ripples from space could wiggle both instruments in just the right way to produce the telltale signal. But according to Jackson and his team, the correlations in the noise data suggested that LIGO might have detected not gravitational waves but some terrestrial disturbance, perhaps an earthquake. They claimed that, at the very least, something was not right with the instruments or with the LIGO scientists’ analysis.

The findings were worrisome. LIGO scientists checked their work again, and a party of experts visited the Niels Bohr Institute last year to dig into the details of Jackson and colleagues’ algorithms. Two groups of researchers set out to independently analyze LIGO’s data and the Copenhagen group’s code.

Now both groups have completed their studies. The new papers explain different aspects of the problem that led Jackson and his coauthors to make their claim. Both analyses definitively conclude that the claim is wrong: There are no unexplained correlations in LIGO’s noise.

“We see no justification for lingering doubts about the discovery of gravitational waves,” the authors of one of the papers, the physicists Martin Green and John Moffat of the Perimeter Institute for Theoretical Physics, said in an email.

The pair has no direct ties to LIGO. “It’s important for science for people to do analysis of data and results independently of the group,” Moffat said, “especially for such a historic event in the history of physics.”

The LIGO gravitational-wave detectors in Hanford, Washington (here), and Livingston, Louisiana.
LIGO Lab/Caltech/MIT

Frans Pretorius, a gravitational-wave expert at Princeton University who was not involved in any of the recent studies, said that for more than a year, he and most of the physics community have been satisfied that LIGO’s analysis, and its discovery, are sound. Nevertheless, he said, “it’s important that finally there is a thorough analysis in the form of a paper,” rather than “media back and forth.”

The spokesperson of the 1,200-person LIGO Scientific Collaboration, David Shoemaker of the Massachusetts Institute of Technology, said by email that the new findings corroborate internal discussions among the team. “Seeing those two non-Collaboration re-analyses does reaffirm my certainty that the detections [of gravitational waves] are genuine,” Shoemaker said, “and also is a reinforcement of our earlier perception of where the Jackson et al. paper has problems.”

In an email, Jackson called Green and Moffat’s paper, which was published in Physics Letters B in September, “absolute rubbish.” When asked to elaborate, he appeared to wrongly characterize their argument and didn’t address the most important issues they raised about his team’s work. Jackson also dismissed the second set of findings by Alex Nielsen of the Max Planck Institute for Gravitational Physics in Hannover, Germany, and three coauthors, whose paper appeared on the physics preprint site in November and is under review by the Journal of Cosmology and Astroparticle Physics. “We are in the process of writing a response to this latest paper,” Jackson wrote, so “I will not explain where they (once again) made their mistakes.”

“The Copenhagen group refuse to accept that they may be wrong,” Moffat said. “In fact, they are wrong.”

Experts say the problem came down to a combination of blunders: several by the Copenhagen physicists, and one by LIGO.

To help tease out the puny wiggle of a passing gravitational wave from a noisy background, LIGO’s algorithms constantly compare the lengths of the twin detectors’ arms, which oscillate when agitated by a passing gravitational wave or background noise, to “template waveforms” — possible gravitational-wave signals calculated from Einstein’s general theory of relativity. When there’s a close match between a signal detected in Hanford and one sensed shortly before or after in Livingston that also fits a template waveform, email alerts fly around the world.
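The matching step can be sketched as a correlation against a bank of candidate waveforms. This toy version uses made-up sinusoidal “templates” in the time domain; real matched filtering is done in the frequency domain and weighted by each detector’s noise spectrum:

```python
import numpy as np

def best_template_match(data, templates):
    """Return the index of the template most correlated with the data.

    A toy stand-in for matched filtering: each template is normalized,
    then scored by the magnitude of its overlap with the data.
    """
    scores = []
    for tpl in templates:
        tpl = tpl / np.linalg.norm(tpl)
        scores.append(float(np.abs(np.dot(data, tpl))))
    return int(np.argmax(scores))

t = np.linspace(0.0, 1.0, 1024)
# Invented "template bank" of three tones at different frequencies.
bank = [np.sin(2 * np.pi * f * t) for f in (35.0, 70.0, 140.0)]

rng = np.random.default_rng(2)
signal = bank[1] + 0.3 * rng.standard_normal(t.size)  # 70 Hz tone buried in noise
match = best_template_match(signal, bank)
```

Even with the signal buried in noise, the overlap with the correct template dominates, which is why template banks can pull feeble signals out of a noisy background.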

The scientists then carefully determine the “best-fit” gravitational waveform that most closely matches the signal in the two detectors. Subtracting this waveform from each signal leaves behind “noise residuals”: the remaining little wiggles in each detector, which should be uncorrelated, since the instruments are about 2,000 miles apart.
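A stripped-down version of that subtraction, using a synthetic wave packet as a stand-in for a best-fit template (every number here is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.2, 2048)  # 0.2 s of synthetic "strain" samples

# Synthetic best-fit waveform: a short 150 Hz wave packet centered at t = 0.1 s.
template = np.sin(2 * np.pi * 150 * t) * np.exp(-((t - 0.1) ** 2) / 1e-3)
noise = 0.5 * rng.standard_normal(t.size)
detector_data = template + noise  # what the instrument records

# Subtracting the best-fit waveform leaves only the noise residuals.
residuals = detector_data - template
```

With a perfect subtraction, the residuals are exactly the underlying noise; an imperfect template leaves bits of signal behind in both detectors, which is precisely the pitfall discussed below.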

In their 2017 paper, the Copenhagen group claimed to have discovered that the noise in Livingston matched the noise in Hanford seven milliseconds later, just as the putative gravitational-wave signal arrived at both detectors. They interpreted this to mean that LIGO either hadn’t cleanly separated their signal from the noise, or correlations in the noise at exactly the right moment were responsible for the entire signal.

However, Green and Moffat identified a series of errors in the Copenhagen team’s data-handling that they say conspired to create a correlation that wasn’t really there.

To look for correlations in the residuals, Jackson and his colleagues picked a 20-millisecond segment of Livingston data and slid 20-millisecond segments of Hanford data across it, registering correlations whenever peaks overlapped with peaks and troughs with troughs. They found that strong correlations happened when the data was offset by seven milliseconds. But Green and Moffat noticed that when they took Jackson and colleagues’ code and reversed the procedure, fixing the Hanford noise data and sliding Livingston data segments across it, the correlation at a seven-millisecond offset went away. “This was a big red flag because it says, OK, you don’t have a calculational method that’s robust,” said Green, an expert in digital signal processing. Rather, the lengths of the data segments and their asymmetric treatment were “tuned to obtain a correlation signal at just about any desired time offset,” he said.
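In outline, the procedure correlates one fixed segment against time-shifted copies of the other and reads off the lag of the peak. A toy version on synthetic data (the segment lengths and lag convention are simplifications, not the Copenhagen code):

```python
import numpy as np

def sliding_correlation(fixed, sliding, max_lag):
    """Normalized correlation of `fixed` against lagged copies of `sliding`.

    Returns (lags, correlations); the lag of the peak is the inferred offset.
    """
    fixed = (fixed - fixed.mean()) / fixed.std()
    sliding = (sliding - sliding.mean()) / sliding.std()
    lags = np.arange(-max_lag, max_lag + 1)
    corr = np.array([np.mean(fixed * np.roll(sliding, lag)) for lag in lags])
    return lags, corr

rng = np.random.default_rng(1)
a = rng.standard_normal(512)
b = np.roll(a, -7) + 0.1 * rng.standard_normal(512)  # b is a advanced by 7 samples

lags, corr = sliding_correlation(a, b, max_lag=20)
offset = int(lags[np.argmax(corr)])  # recovers the 7-sample shift
```

Green and Moffat’s point was that a robust statistic should give the same answer (with the lag’s sign flipped) when the roles of the fixed and sliding segments are swapped; by their account, the Copenhagen procedure did not.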

In a separate calculation, Jackson and his team seemed to find non-random, correlated patterns of peaks and troughs throughout the noise records in the two detectors. But Green and Moffat inferred that the Copenhagen physicists had not “windowed” the two sets of noise data. Windowing is a standard technique of smoothly dialing a signal to zero at the beginning and end of a segment of data before doing a mathematical operation called a “Fourier transform” that facilitates comparisons to other data. The Fourier transform treats a data segment as if it is cyclical, looping together the beginning and end. If the segment isn’t windowed, abrupt changes at the endpoints called “border distortions” can wind up looking like correlations when the data is compared with a second data set.
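The fix is routine in signal processing: multiply the segment by a window, such as a Hann window, that tapers smoothly to zero at both ends before taking the Fourier transform. A minimal numpy illustration with invented data:

```python
import numpy as np

n = 256
# A tone whose frequency doesn't divide the segment length, plus an offset,
# so the segment's endpoints don't match up when the FFT "loops" it.
segment = np.sin(2 * np.pi * 10.3 * np.arange(n) / n) + 2.0

window = np.hanning(n)   # tapers smoothly to zero at both endpoints
windowed = segment * window

# Without the window, the endpoint mismatch acts like a step ("border
# distortion") that smears spurious power across the spectrum and can
# show up as false correlation when two data sets are compared.
spectrum_raw = np.abs(np.fft.rfft(segment))
spectrum_win = np.abs(np.fft.rfft(windowed))
```

Because the windowed segment starts and ends at zero, the implicit loop the Fourier transform makes is seamless, and the border distortions disappear.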

When Green and Moffat windowed the two sets of noise data, the claimed correlations went away. “Our concern is that the calculation that was done by the Copenhagen group was contrived to get the result they wanted to get,” Green said.

Lucy Reading-Ikkanda/Quanta Magazine. Sources: doi: 10.7935/K5MW2F23 (Gravitational Wave Signal); doi: 10.1088/1475-7516/2017/08/013 (2017 interpretation of noise); Duncan Brown (2018 interpretation of noise)

Nielsen and his coauthors — Alexander Nitz, Collin Capano and Duncan Brown — also concluded that the claimed correlation in the noise isn’t real, but they say the error can be attributed at least in part to LIGO’s mistake in providing the wrong data in the first figure of their 2016 discovery paper in Physical Review Letters.

Figure 1 is “the thing people have tattooed on their arms,” said Brown, a gravitational-wave astronomer at Syracuse University and a former LIGO member, who left the collaboration this year to pursue independent analyses of the data.

The figure’s top panel shows side-by-side squiggly lines representing the gravitational-wave signal detected in Livingston and Hanford. Below that are template waveforms closely matching the signals and, in the bottom panel, jagged lines representing the “noise residuals” in the two detectors, after the template waveform has been subtracted from each data set.

Brown explained that Jackson’s code, which he examined in detail during a visit to Copenhagen last year, detects an overlap in the residuals at seven milliseconds offset for a mundane reason: The template waveform shown in Figure 1 is not the “best-fit” waveform that LIGO actually used in its rigorous analysis. The figure was created for illustrative purposes, Brown and others explained. The figure-maker had matched a template waveform to the twin signals by eye, rather than using the best-fit signal as determined by careful calculations. Small imperfections in the subtracted waveform meant that there was some gravitational-wave signal left in both data sets that didn’t get subtracted off, and which ended up mixed in with the noise shown at the bottom of Figure 1—producing correlations that could be teased out by Jackson and colleagues’ algorithms. “What they discovered was an imperfect subtraction” of the signal waveform, Brown said. “When we subtract a better waveform than the one used in the PRL paper, we find no statistically significant residuals.”

“If LIGO did anything wrong,” he added, “it was not making it crystal-clear that pieces of that figure were illustrative and the detection claim is not based on that plot.” Jackson, however, accused LIGO scientists in an email of “misconduct” and making “the conscious decision not to inform the reader that they were violating one of the central canons of good scientific practice.”

Which is to blame, LIGO’s sloppy figure or the Copenhagen group’s faulty calculations? “In reality, I think it’s both,” Brown said. If Jackson and his colleagues were able to tune their parameters to create correlations at seven milliseconds offset, as Green and Moffat’s findings suggest, this would have essentially biased their calculations. Then, at the same offset, their biased algorithm picked out the imperfectly subtracted bits of signal in the noise, reinforcing the false impression.

Jackson, however, maintains that the unexplained correlations are present and says he and his colleagues are preparing a rebuttal to the recent work. He still thinks LIGO’s first, most powerful gravitational-wave signal (and all others by extension) might have been something else altogether — perhaps, he said, “a lightning strike in Burkina Faso, seismic, or even one of the mysterious ‘glitches’ that LIGO detectors see about once an hour.”

But both new papers reviewed and reanalyzed LIGO’s raw data and rediscovered the gravitational-wave signals within it, using different algorithms than LIGO’s. Other researchers have done the same.

“I think the pursuit of independent analyses of gravitational-wave data is a very important and valuable thing to do, and we are delighted that more people are getting involved,” said Shoemaker, LIGO’s spokesperson. “That the Jackson et al. work has stimulated some additional independent investigations can be seen as a positive outcome, but I personally think it comes with a fully unnecessary cost of ‘drama.’”

Visualizations of the 10 black hole collisions detected by LIGO so far, along with the gravitational-wave signals they produced.

Meanwhile, LIGO’s twin detectors, along with a third instrument in Europe called Virgo that switched on in 2017, have recorded 10 black hole collisions to date and one space-time wiggle from colliding neutron stars. Scientists announced the four latest black hole detections this month and released dazzling graphics showing the universe’s growing population of these mysterious, invisible, super-dense spheres. When the neutron-star collision was detected last year, 70 telescopes swiveled toward the fireworks; their observations indicated the cosmic origin of gold, the expansion rate of the universe and more.

Brown said it isn’t surprising that LIGO’s revolutionary discovery invited skepticism. A powerful event was detected “basically the day we turned it on,” he said, and the rate of black hole collisions in the cosmos has turned out to be at the high end of expectations.

“The universe loves gravitational-wave astronomers,” he said.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.
