Gizmocrazed – Future Technology News: Artificial Intelligence, Medical Breakthroughs, Virtual Reality

Space Photos of the Week: Shooting Stars and Dwarf Galaxies (Sat, 22 Sep 2018 14:53:12 +0000)

Iron your space suit and polish your helmet, because this week we are going intergalactic. Let’s begin by visiting a galaxy in a far-off constellation called Phoenix. This cosmic patch might look like a random arrangement of stars, and while the Phoenix Dwarf galaxy is a real galaxy, it’s still a bit … odd.

Next we sift through the debris of a comet called 21P/Giacobini-Zinner, which is responsible for the Draconid meteor shower in the October skies. Did you know that meteor showers are actually the Earth passing through a comet’s debris trail? When tiny particles of ice and dust burn up in our atmosphere, they create what we know as shooting stars.

Now we try on our Parker Solar Probe glasses and take a look at the center of our galaxy, using the wide-angle lens on the brand spankin’ new spacecraft. The probe, which is on a seven-year journey to the Sun, tested out its instruments last week and confirmed that every one was up and running, ready for its mission. Take a gander at Parker’s first image, which features our home galaxy as well as a photobomb from Jupiter.

Finally, we check out an icy crater on Mars. Temperatures here drop so low in the winter that CO2 in the Martian atmosphere freezes out inside craters, making for some really cool (not to mention cold) photos.

Want to keep zooming around in space? Take a gander at all our photos here.

More Great WIRED Stories

Here's how to set up a VPN and protect your data (Sat, 22 Sep 2018 05:32:17 +0000)

Every product here is independently selected by Mashable journalists. If you buy something featured, we may earn an affiliate commission which helps support our work.

Image: bob al-greene/mashable

On today’s internet, having at least some level of protection is essential.

A VPN, or virtual private network, is a tool that adds an extra layer of protection — it essentially masks your connection while encrypting your data. The best part is you don’t need any physical hardware to use one; using a VPN is usually as simple as downloading and launching an app.

Once you have a VPN set up, you can use it on your home WiFi, public networks, over LTE, and even while traveling.

Certainly, the internet is a great way to connect with others, but users need to be careful now more than ever. A VPN will mask your IP address and give you a bit more security, especially for your viewing history — not to mention it makes using public WiFi networks much safer and can stop hackers from accessing data.

Here are the basic steps to take when choosing and setting up a VPN.

Plenty of options

Image: Christopher Burns/Unsplash

VPNs are a dime a dozen. There are plenty of options for consumers to pick from, but not all are equal. You’ll want a provider that doesn’t sell or store your information, so there’s less risk of your data getting into the wrong hands. Your ISP (internet service provider) logs your data and could use it for marketing purposes, or be compelled to give it up. One of the big reasons for using a VPN is to avoid that.

Mashable has recently reviewed IPVanish, NordVPN, and TunnelBear. All VPNs have pros and cons, but at the end of the day, a lot of it comes down to personal preference and price.

Getting started with a VPN

Image: charles poladian/mashable

Once you’ve downloaded your VPN, installed it on the device, created an account, and logged into it, you’re ready to go.

The app is command central. It lets you turn the VPN on and off, change settings (like speed limits), and pick your server location. Chances are it will suggest a location for you, maybe somewhere in the U.S. or another country. Depending on what you want to do, such as accessing your home Netflix library while traveling, you may want to change this.

Many VPNs can be set to auto-connect when you turn the device on and, on a PC or Mac, can even live in your menu or status bar. The menu-bar interface lets you change the location quickly, or even shut off the VPN immediately if need be — handy when you need to make a VoIP or FaceTime call, which some VPNs don’t support well since those calls use a lot of data. This easy access is available across most VPNs, including the three mentioned above.

Image: screenshot by jake krol/mashable

For many, it’s as simple as downloading the VPN app to your device. NordVPN has an app for most common platforms and even supports a few uncommon ones: native apps are available for macOS, Windows, iOS, Android, Linux, and Android TV, along with browser extensions for Chrome and Firefox.

Image: screenshot by jake krol/mashable

The typical experience within the application is signing in and choosing the location of your VPN. With Nord, you get a simple white and blue map (decorated with some boats) that provides a visualization of the connection. Click the location of your choosing on the map or in the sidebar to start the VPN. You can also choose from a list of specific VPN servers designed for different uses. For instance, if you need a dedicated IP address or want to route your traffic through two different servers instead of just one (called a double VPN — more on that in a sec), there is likely a preset in the application for that.

A double VPN is useful if you want an extra layer of security by masking what you’re doing behind two different servers. While nothing is ever truly untraceable, this can make it pretty hard to track the activity. 

Once you’re signed in to the VPN and connected, you can minimize the app so it runs in the background. A significant benefit of these VPN services is that they need minimal processing power and don’t need to be front and center.

NordVPN also provides instructions for running it on specific routers. The advantage of running your VPN directly through your router is that it will cover all of your connected devices. However, going this route means you’ll need to make changes on the router side, which can be a little cumbersome for novice users. And while there’s a long list of routers that NordVPN provides instructions for, it doesn’t cover all models.

Multi-device support

Image: Christopher Gower/Unsplash

While you’ll usually want to use a VPN on a computer, many services offer apps for iOS, Android, and even some streaming devices. However, keep in mind that a VPN doesn’t let you bypass usage limits on LTE or a cellular network. If anything, it will use more data.

On an iOS or Android device, setup is relatively easy: just download the respective app and sign in. You still get most of the settings of the desktop app, but just how many features translate to mobile will vary depending on both the VPN and your OS.

For those who rely on an iPad or Android tablet while traveling, or who frequent public WiFi networks, a VPN protects your data and stops bad actors from accessing it, even on an open network, thanks to the VPN’s encryption.

In other words, a VPN can stop you from becoming a statistic. Many of these apps also offer kill switches: if the VPN connection drops and your IP address is at risk of being exposed, the app will cut off internet access altogether until you reconnect.

Set it up before you travel

Image: rawpixel/Unsplash

Before traveling, you should already have the VPN set up and working. In some countries, like China, you will have problems even downloading a VPN.

You should also definitely check with the VPN provider to make sure it works in the country you’re visiting, and that there are multiple servers available. (We have a handy list of recommendations here.)

So, a VPN shouldn’t be that frightening. These services provide a considerable level of value and protection. Moreover, for novices, setup should be as easy as downloading an app, logging in, and clicking go. More advanced users will cherish the ability to customize the experience through advanced options. Either way, you’ll be able to sleep a little easier knowing your internet activity is more secure than before.


VCs say Silicon Valley isn’t the gold mine it used to be (Sat, 22 Sep 2018 05:31:30 +0000)

In the days leading up to TechCrunch Disrupt SF 2018, The Economist published the cover story, ‘Why Startups Are Leaving Silicon Valley.’

The author outlined reasons why the Valley has “peaked.” Venture capital investors are deploying capital outside the Bay Area more than ever before. High-profile entrepreneurs and investors (Peter Thiel, for example) have left. Rising rents are making it impossible for new blood to make a living, let alone build businesses. And according to a recent survey, 46 percent of Bay Area residents want to get the hell out, an increase from 34 percent two years ago.

Needless to say, the future of Silicon Valley was top of mind on stage at Disrupt.

“It’s hard to make a difference in San Francisco as a single entrepreneur,” said J.D. Vance, the author of ‘Hillbilly Elegy’ and a managing partner at Revolution’s Rise of the Rest Fund, which backs seed-stage companies based outside Silicon Valley. “It’s not as hard to make a difference as a successful entrepreneur in Columbus, Ohio.”

In conversation with Vance, Revolution CEO Steve Case said he’s noticed a “mega-trend” emerging. Founders from cities like Pittsburgh, Detroit or Portland are opting to stay in their hometowns instead of moving to U.S. innovation hubs like San Francisco.


“We are seeing the beginnings of a slowing of what has been a brain drain the last 20 years,” Case said. “It’s not just watching where the capital flows, it’s watching where the talent flows. And the sense that you have to be here or you can’t play is going to start diminishing.”

Farewell, San Francisco

“It’s too expensive to live here,” said Aileen Lee, the founder of seed-stage VC firm Cowboy Ventures, amid a conversation with leading venture capitalists Spark Capital general partner Megan Quinn and Benchmark general partner Sarah Tavel.

“I know that there are a lot of people in the Bay Area that are trying to work on that problem and I hope that they are successful,” Lee added. “It’s an amazing place to live and we’ve made it really challenging for people to live here and not worry about making ends meet.”

One of Cowboy’s portfolio companies opted to relocate from Silicon Valley to Colorado when it came time to scale its business. That kind of move would’ve historically been seen as a failure. Today, it may be a sign of strong business acumen.

Quinn said that of Spark’s 28 growth-stage portfolio companies, Raleigh, North Carolina-based Pendo has the easiest time recruiting, both locally and from the Bay Area.

She advises her Bay Area-based late-stage companies to open a second office outside of the Valley where lower-cost talent is available.

“We often say go to [], draw a three-hour circle around San Francisco where they have direct flights, find a city that has a university and open up a second office as quickly as possible,” Quinn said.

Still, all three firms invest in a lot of companies based in San Francisco. Of Benchmark’s 10 most recent investments, for example, eight were based in SF, according to Crunchbase.

“I used to believe really strongly if you wanted to build a multi-billion dollar company you had to be based here,” Tavel said. “I’ve stopped giving that soapbox speech.”

Underestimated talent

A lot of Bay Area VCs have been blind to the droves of tech talent located outside the region. Believe it or not, there are great engineers in America’s small- and medium-sized markets too.

At Disrupt, Backstage Capital founder Arlan Hamilton announced the firm would launch an accelerator to further amplify companies led by underestimated founders. The program will have cohorts based in four cities; San Francisco was noticeably absent from that list.

Instead, the firm, which invests in underrepresented founders and recently raised a $36 million fund, will work with companies in Philadelphia, Los Angeles, London and one more city, which will be determined by a public vote. Aniyia Williams, the founder of Tinsel and Black & Brown Founders, will spearhead the Philadelphia effort.

“For us, it’s about closing that wealth gap to address inequity in tech,” Williams said. “There needs to be more active participation from everyone.”

Hamilton added that for her, the tech talent in LA and London is undeniable.

“There is a lot of money and a lot of investors … it reminds me of three years ago in Silicon Valley,” Hamilton said.

Silicon Valley vs. China

Silicon Valley’s demise may not be just a result of increased costs of living or investors overlooking talent in other geographies. It may also be a result of heightened competition abroad.

Doug Leone, an early- and growth-stage investor at Sequoia Capital, said at Disrupt that he’s noticed a very different work ethic in China.

Chinese entrepreneurs, he explained, are more ruthless than their American counterparts and they’re putting in a whole lot more hours.

“I’ve had dinner in China until after 10 p.m. and people go to work after 10 p.m.,” Leone recalled.

“We don’t see that in the U.S. I’m not saying the U.S. founders oughta do that but those are the differences. They are similar in character. They are similar in dreams. They are similar in how they want to change the world. They are ultra-driven … The Chinese founders have a whole other gear because I think they are a little more desperate.”

Much of this, however, has been said before and still, somehow, Silicon Valley remained the place to be for investors and startup entrepreneurs.

The reality is, those engaged in tech culture are always anxiously waiting for the bubble to pop, the market to crash and for “peak Valley” to finally arrive.

Maybe, just maybe, Silicon Valley is forever.

Here’s more of our coverage of Disrupt 2018.

Researchers Have Finally Found Human Skeletal Stem Cells (Sat, 22 Sep 2018 03:41:07 +0000)

A small bone structure that developed from human skeletal stem cells. Blue coloring indicates cartilage, brown represents bone marrow and yellow shows bone. (Credit: Chan and Longaker et al.)

If only we could regrow our broken bones like Harry Potter, Skele-gro style. Or, at the very least, heal up like a limb-regenerating newt. Alas, we humans possess no such abilities. Though our bodies can mend broken bones, the older we get, the shoddier that patch job gets. As for cartilage — the crucial cushioning that keeps our bones from rubbing together — once that’s gone, it’s gone for good.

But a new discovery by researchers could change that outlook. A team from Stanford University has finally discovered skeletal stem cells — the cells that give rise to bone, cartilage and the supportive, spongy inside of a bone called stroma — in humans for the first time. And the hope is that someday, doctors could use these stem cells to help people regrow broken bones and missing cartilage.

A Break in the Case

First, a quick primer on a couple of types of stem cells (of which there are many).

In general, stem cells have the ability to divide and develop into specialized variations, like blood or muscle cells. Embryonic stem cells, as you might’ve guessed, are found in embryos. They have the ability to develop into basically any type of cell in the body. Adult stem cells, on the other hand, aren’t as full of potential. They still develop into different cells, but usually those cell types are restricted to the organ from which they originate. For instance, neural stem cells reside in the brain, where they divide and ultimately spawn brain cells like neurons.

Before now, experts had only uncovered so-called mesenchymal cells in their hunt for human skeletal stem cells. Mesenchymal cells are adult stem cells that can develop into a number of other, more specialized cells like bone, cartilage, fat and muscle. But researchers still couldn’t home in specifically on skeletal stem cells.

In 2015, the Stanford team, led by Michael Longaker, a professor of plastic and reconstructive surgery at the university, got a break. They announced that they’d found skeletal stem cells in mice. To do this, they’d looked at mice that had been genetically engineered so that different stem cell subtypes in the mesenchymal mix all produced different colors. By having these subtypes color-coded, the team was able to track their lineage — their division and evolution into more specialized cells.

Virtually all cells have surface proteins, and sometimes, different species’ cells share similar surface proteins. So the team figured they’d just examine the proteins on the mouse skeletal stem cells they’d discovered and look for similar ones on human mesenchymal cells. Then, they’d track the lineage of those cells to see if they evolved into specialized bone cells. Easy, right? Nope. They couldn’t find any human cells with enough surface proteins in common with what they’d found in mouse cells.

Switching Up Search Tactics

Instead, Longaker and his team did some reverse engineering. They looked at bones donated from fetal remains, specifically focusing on the still-growing ends, giving researchers a better chance of finding the stem cells they’d been hunting.

Since surface proteins were a no-go, they were on the lookout for cells with gene signatures similar to that of mouse skeletal stem cells. Once they found human cells that were a close enough match, they isolated them and popped them in a petri dish. And voila: The cells only grew new bits of bone, cartilage and spongy stroma. Though Longaker and company were pretty confident they’d found what they were looking for, they still had to make sure.

So, finally, the group turned to chunks of adult bone they’d acquired from people who’d received procedures like hip replacements. The group spotted cells with that same genetic profile and extracted them. And they, too, developed into bone, cartilage and stroma in petri dishes. The team published their findings in Cell.

Put It Into Practice

Now that we’ve found these stem cells, what’s next? Longaker hopes that within the next 10 years or so, doctors will be able to put these cells to use. “The United States has a rapidly aging population that undergoes almost 2 million joint replacements each year,” he says in a press release. “If we can use this stem cell for relatively noninvasive therapies, it could be a dream come true.”

Marc Benioff Bets on Cleanup Tech for Ocean Trash (Sat, 22 Sep 2018 02:53:22 +0000)


Marc Benioff, founder, chair, and co-CEO of Salesforce


Boyan Slat, founder of the Ocean Cleanup

As a boy tinkerer in the Netherlands, Boyan Slat made zip lines and, at age 14, set a Guinness World Record for launching the most water rockets—213 of them—at once. You know, typical kid stuff. In hindsight, Slat says, “I just didn’t have a real problem to work on.” Soon, he found one. In 2012, at age 18, he gave a TEDx talk outlining a tantalizing way to filter plastic waste out of the oceans’ gyres, vortices where sea-junk tends to accumulate. A few months later, Slat dropped out of engineering school, founded the nonprofit Ocean Cleanup, and began to design in earnest.

The floater pipe for the Ocean Cleanup’s plastic-eating machine.

Michelle Groskopf

Slat’s goal was to build a system that uses ocean currents to push trash into a “passive collector,” which acts like an enormous lint trap, snaring everything from discarded fishing nets to scraps of plastic a few millimeters across. By the time you read this, the current prototype—a 600-meter-long, U-shaped floating tube suspending a stiff, 3-meter-deep screen—should have already deployed to the Great Pacific Garbage Patch, a pair of swirling trash fields located between the US and Japan. If it works, then roughly every seven weeks, the trash will be taken out by boat and recycled.

That matters because, well, the oceans are currently a mess. Marc Benioff, who contributed to the $22 million that the Ocean Cleanup raised last year, calls the problem of plastic pollution “out of control.” Noting that plastics have been in widespread use for only about 50 years, he adds, “Where are we going to be in another 50?” Already, one of the Ocean Cleanup’s first projects—a 30-boat trawl and airborne survey of the GPGP—concluded that the patch contains around 80,000 metric tons of plastic, far more than previously believed. Still, in five years, Slat estimates, an array of 60 such tubes could remove almost half of it.

Not everyone is so sanguine. Experts warned that one of Slat’s early designs, involving seabed anchors, wouldn’t work (the team scrapped it) and fretted that biofouling—the gradual accumulation of kelp and slime—would cause the tube to sink (unlikely, according to the company’s tests, though they’re looking into the possibility of adding a coating to keep sea-stuff from attaching to the screen). Others noted that Slat’s approach, targeting the top 3 meters of the gyre, would do little to address the problem of microplastic, the tiny fragments that now suffuse the seas and get eaten by fish—and then by us. (He counters that the gyre’s surface contains the most plastic by weight, and also that removing larger pieces will prevent them from degrading into trillions of additional pieces of microplastic.)

Eventually, Slat hopes to expand the project to all five ocean gyres, with the option for corporations and private groups to sponsor part of the cleanup array. For now, though, he is focused on the beta test. The Pacific Ocean is a notoriously rough environment, and parts of the Garbage Patch are more than a thousand nautical miles offshore. “The goal, first, is to prove the technology,” Slat explains. “We’ve really tried to eliminate every possible risk, but the only way to be absolutely sure is to do it.”

This article appears in the October issue. Subscribe now.

Update 9-18-2018, 1:40 pm EDT: This story has been revised to correctly describe Marc Benioff’s contribution to the Ocean Cleanup.


How to Measure Things That Are Astronomically Far Away (Fri, 21 Sep 2018 14:53:14 +0000)

If you want to find the size of a basketball, you can use a normal meter stick to measure the diameter. You should get a value of around 0.24 meters. Please don’t use inches—they are just harder to deal with. Anyway, you probably aren’t using Imperial units since there are only three countries that officially use this system: Myanmar, Liberia, and… the United States. It’s time to move to the metric system like everyone else.

But what if you want the distance from New York to Los Angeles? Sure, you could still use meters, with a distance of about 3.93 x 10^6 meters, or you could use kilometers (3,930 km). But really, the kilometer is just a nicer way of using meters. It’s the same unit of distance, just with a prefix. Meters (or kilometers) work well enough for things as big as the Earth, with a radius of about 6.37 x 10^6 meters.

However, outside of the Earth, stuff starts getting super big. With very large things it’s often useful to use very large distance units. Let’s go over the three most common distance units in astronomy.

The Astronomical Unit

The name of this unit sort of makes it sound more important than it is—it’s still important, but not for the rest of the universe. In short, the Astronomical Unit (AU) is the distance from the Earth to the Sun. That’s not technically correct since the Earth’s orbit around the Sun isn’t perfectly circular. Let’s just say the AU is the average distance to the Sun—that will work for now.

With the AU, it’s much easier to measure distances in the solar system. For instance, the distance from the Sun to Mars is about 1.52 AU and the distance to Pluto is around 40 AU. But there is an even better reason to describe distances in AU than just convenience. Humans first used the Astronomical Unit because we didn’t know the distance from the Earth to the Sun. Yes, that sounds crazy, but it’s true.

So, here’s the deal. The ancient Greeks did some awesome measurements of the Earth and moon (and they tried to get the distance to the Sun)—but that one’s pretty tough. But even without an accurate value for the Sun-Earth distance, later astronomers could still do some nice modeling of the solar system. In fact, Johannes Kepler found that the square of the time it takes a planet to orbit the Sun is proportional to the cube of its distance from the Sun (again, technically these orbits are ellipses). Using this, he determined the distance from other planets to the Sun in terms of the Earth’s distance. Boom—that gets you the distance in AU.
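Kepler's relationship is easy to check numerically. Here is a quick Python sketch using Mars's orbital period of about 1.881 Earth years (a standard value, not quoted in this article):

```python
# Kepler's third law: T^2 is proportional to a^3 (T in years, a in AU),
# so a planet's average distance from the Sun is a = T**(2/3).
mars_period_years = 1.881          # Mars's orbital period, in Earth years
mars_distance_au = mars_period_years ** (2 / 3)
print(round(mars_distance_au, 2))  # 1.52 -- matching the Sun-Mars distance quoted above
```

No Earth-Sun distance in meters is needed for this step; that is exactly why early astronomers could map the solar system in AU before they knew how big an AU actually was.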

Of course no one wants to stop and leave all the solar system stuff in terms of AU. We really want the conversion factor between AU and meters. To get this, you need to actually measure the Earth-Sun distance. That’s not such an easy task, but there is one way to get a reasonable value—use the transit of Venus. This happens when the planet Venus passes between the Earth and the Sun (it doesn’t happen as often as you would think). By measuring the exact start and finish time of the transit from different parts of the Earth you can get a value for AU in terms of the size of the Earth (which we mostly know). Here are all the details of that calculation in case you are interested.

In the end, we have an Earth-Sun distance of about 1.496 x 10^11 meters. Yes, that’s pretty big.

The Parsec

How far is the closest star? That would be Alpha Centauri at a distance of 2.67 x 10^5 AU (you can convert that to meters for homework). So you see, we run into the same problem again. It might make more sense to use a distance unit that doesn’t involve ginormous numbers. That’s where the parsec comes in.

The parsec depends on one big idea—parallax. Let’s start with a simple experiment you can do at home. Hold your arm out straight in front of you with your thumb sticking up. Don’t worry about looking silly, here—I will do it too.

Now look at your thumb and close one eye (it might help to also say “camera one”). With one eye closed, what in the background does your thumb line up with? It doesn’t matter, just realize that it is somewhere. Next, switch eyes (and say “camera two”)—but don’t move your thumb. You should notice that your thumb’s position with respect to the background changes. This is parallax. It is the apparent change in position of an object when viewed from a different location. The closer the object is to your face, the greater the apparent change. Oh, this is part of the way augmented reality in the iOS ARKit works.

If you want to calculate the distance to an object, you can find it from the size of the angular shift and the distance between the two viewing points with the following equation (assuming the distance to the object is much greater than the distance between observations): distance ≈ (separation between viewing points) / (angular shift).

Oh, you need that angle measured in radians (not degrees). You can see that in order to get measurable angular shifts, you need a pretty big change in observation locations for things like a star (super far away). What if we observe an object from the Earth on one side of the Sun and then 6 months later on the other side? In that case, a star would give a small but measurable angular shift.

With the known distance from the Earth to the Sun (yes, we need that distance still) and the angular shift of a star then we can calculate the distance to the star. Yes, this also depends on other stars that are super far away so that they don’t move too much. If all the stars were the same distance from our Sun, it would be difficult to measure the angular shift.
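The small-angle parallax idea can be sketched in a few lines of Python. The 0.75-arc-second shift below is a hypothetical example value, chosen because it is close to Alpha Centauri's measured parallax (a figure not given in the article):

```python
import math

def parallax_distance_au(baseline_au, shift_arcsec):
    """Small-angle parallax: distance ~ baseline / angular shift (in radians)."""
    shift_rad = math.radians(shift_arcsec / 3600)  # arc seconds -> degrees -> radians
    return baseline_au / shift_rad

# Hypothetical star showing a 0.75-arc-second shift over a 1 AU baseline
d = parallax_distance_au(1, 0.75)
print(f"{d:.2e} AU")  # ~2.75e5 AU, the same ballpark as Alpha Centauri's distance
```

The smaller the measured shift, the farther the star, which is why this method only works out to a limited distance before the angles become too tiny to measure.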

Now for the parsec. It is defined such that 1 parsec is the distance a star needs to be at to have an apparent angular shift of 1 arc second (1/3600 of a degree). Let’s find the conversion of parsecs to AU—just for fun.

Step one is to get the angular shift of 1 arc second in radians: (1/3600 of a degree) x (π/180) ≈ 4.85 x 10^-6 radians.

The rest is simple. Just take 1 AU divided by this angular shift. If you put it in your calculator you get 2.06 x 10^5 AU. Go ahead and repeat this for the conversion between parsec and meters. It will be fun.
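Both steps can be reproduced in a few lines of Python, using the Earth-Sun distance from earlier in the article:

```python
import math

AU_M = 1.496e11  # Earth-Sun distance in meters, from earlier in the article

# Step one: 1 arc second (1/3600 of a degree) in radians
arcsec_rad = math.radians(1 / 3600)  # ~4.85e-6 rad

# A parsec is the distance at which a 1 AU baseline subtends 1 arc second
parsec_au = 1 / arcsec_rad           # ~2.06e5 AU
parsec_m = parsec_au * AU_M          # ~3.09e16 m (the homework answer)

print(f"1 parsec = {parsec_au:.3e} AU = {parsec_m:.3e} m")
```

Note that the conversion to meters inherits whatever uncertainty there is in the measured Earth-Sun distance, which is why nailing down the AU mattered so much historically.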

The Light-year

Parsecs are cool. They sound so cool that you could use them in a space movie but use it as a time and not a distance (since it sounds like a distance). Then 40 years later, you could make another movie that somehow justifies the incorrect use of the parsec. That would be awesome (hint—I’m a huge Star Wars fan).

But wait. There is another distance unit that sounds like a time. It’s the light-year. Yes, a year is a unit of time but the light-year is a unit of distance. It is defined as the distance light travels in one year.

The speed of light is both finite and constant, with a value of approximately 2.998 x 10^8 m/s. The distance light travels in a certain amount of time can be found with the definition of velocity (in one dimension): distance = speed x time.

Calculating the size of a light-year means finding the time interval (Δt) in units of seconds instead of years, since the speed is in meters per second. One year is about 3.156 x 10^7 seconds, which gives a light-year of about 9.46 x 10^15 meters.

How about this? What if you convert 1 AU into light-years? I will leave the math as a homework problem for you, but the answer is 1.58 x 10^-5 light-years. This is the same as 8.3 light-minutes. Think about that. It takes light 8 minutes to go from the Sun to the Earth. Or how about this? Jupiter is around 40 light-minutes away from the Earth (distance varies). So, when you look at Jupiter in the night sky, you are actually looking at it in the past. Forty minutes in the past. Your eyes are a time machine.
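These light-travel-time conversions can be sketched the same way, again using the speed of light and Earth-Sun distance quoted in this article:

```python
C = 2.998e8                    # speed of light, m/s
YEAR_S = 365.25 * 24 * 3600    # one year in seconds (~3.156e7)
AU_M = 1.496e11                # Earth-Sun distance in meters

light_year_m = C * YEAR_S                # ~9.46e15 m per light-year
au_in_light_years = AU_M / light_year_m  # ~1.58e-5 light-years
au_in_light_minutes = AU_M / (C * 60)    # ~8.3 light-minutes, Sun-to-Earth travel time

print(f"1 AU = {au_in_light_years:.2e} light-years = {au_in_light_minutes:.1f} light-minutes")
```

Swap in Jupiter's distance from Earth for AU_M and the same last line gives its roughly 40 light-minute delay.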

The farther away we look, the deeper into the past we look. Even for things very close, like your computer screen, you are looking at it in the past (very near past). Since light takes a finite time to travel and since we see with light—you are looking in the past.

That’s what makes the light-year unit so appropriate for astronomy. When we look at a galaxy that is 10 billion light-years away, we are looking 10 billion years into the past. Awesome.


Instagram says it's not testing or building a reposting feature (Fri, 21 Sep 2018 05:32:46 +0000)

Instagram could be looking to make sharing other people’s posts easier. But is that such a good thing?

Image: mashable/lili sams

Instagram is reportedly testing new features that could dramatically change what your feed looks like.

As first reported by The Verge, the company is looking to introduce native reposting, which will allow users to share posts from other accounts to your own feed.

An Instagram spokesperson, however, told Mashable that it is not a feature the company is currently building or testing.

According to The Verge, which viewed two screenshots of the feature, the “seamless sharing” feature will introduce a “share to feed” button when you open the “…” menu in the top right corner of a post. Currently, users need a third-party app to repost.

It’s a feature that has long been the subject of rumours, but the company has been reluctant to roll out such sharing, which has long been a mainstay of Twitter and Facebook. In June, the platform rolled out story sharing, which allows users to repost stories that they’re tagged in.

Mike Krieger, Instagram’s co-founder, noted to Bloomberg earlier this year that sharing might not go down well with users.

“It would make people feel like the content in their feed was not what they had chosen,” he told the publication.

Allowing a quick way to repost natively could be problematic, as we’ve seen with Facebook and Twitter’s efforts to quell fake news and influence campaigns, which can quickly spread on those networks. 

Instagram is a sanctuary from that kind of content, but it might not be for long.

Instagram’s also sorting out hashtag cramming

Aside from native reposting, Instagram is also reportedly testing other features including separating hashtags from captions, geofenced sharing, quiz stickers, and stories highlight stickers, according to TechCrunch.

The separation of hashtags from captions could be the panacea to the annoying glut of hashtags (like #picoftheday and #instagood), often cluttered at the bottom of a post.

Spotted by Jane Manchun Wong inside Instagram’s Android app, a separate button called “Add Hashtags” will appear on the New Post screen, which will allow users to use hashtags without including them in the caption. 

Also discovered by Wong is the ability to choose which locations can see your post or story, where you can select or omit specific countries as you wish.

Finally, another find by Wong is the ability to share other accounts’ Stories Highlights as a sticker in your own story. An Instagram spokesperson confirmed it was testing this feature, but it would only work with public highlights or your own highlights.

Also found by Twitter user @WABetaInfo is the introduction of quiz stickers to stories, which gives your followers three options to choose from.

UPDATE: Sept. 21, 2018, 12:49 p.m. AEST Added an update from Instagram.


Cleo, the ‘digital assistant’ that replaces your banking apps, picks up $10M Series A led by Balderton Fri, 21 Sep 2018 05:31:42 +0000 When Cleo, the London-based “digital assistant” that wants to replace your banking apps, quietly entered the U.S., the company couldn’t have expected to be an instant hit. Many better-funded British startups have failed to “break America.” However, just four months later, the fintech upstart counts 350,000 users across the pond — claiming more than 600,000 active users in the U.K., U.S. and Canada in total — and says it is adding 30,000 new signups each week. All of which hasn’t gone unnoticed by investors.

Already backed by some of the biggest VC names in the London tech scene — including Entrepreneur First, Moonfruit founder Wendy Tan White, Skype founder Niklas Zennström, Wonga founder Errol Damelin, TransferWise founder Taavet Hinrikus and LocalGlobe — Cleo is adding Balderton Capital to the list.

The European venture capital firm, which has previously invested in fintech unicorn Revolut and the well-established GoCardless, has led Cleo’s $10 million Series A round, in which I understand most early backers, including Zennström, also followed on. One source told me the Series A gives the hot London startup a post-money valuation of around £30 million (~$39.7m), although Cleo declined to comment.

On a call, co-founder and CEO Barney Hussey-Yeo explained that the new capital will be used to continue scaling the company, with further international expansion the name of the game. Hussey-Yeo says Cleo will be targeting Western Europe, the Americas and Australasia, aiming to launch in a whopping 22 countries in the next 12 months, as Cleo bids to become the “default interface” for millennials interacting with and managing their money.

Primarily accessed via Facebook Messenger, the AI-powered chatbot gives insights into your spending across multiple accounts and credit cards, broken down by transaction, category or merchant. In addition, Cleo lets you take a number of actions based on the financial data it has gleaned. You can choose to put money aside for a rainy day or specific goal, send money to your Facebook Messenger contacts, donate to charity, set spending alerts and more.

However, in the context of traction and Cleo’s broader global ambitions, it is the decision not to become a bank in its own right that Hussey-Yeo feels is really beginning to bear fruit. His argument has always been that you don’t need to be a bank to become the primary way users interface with their finances, and that without the regulatory and capital burden that becoming a fully licensed bank brings, you can scale much more quickly. I have a feeling that strategy — and its pros and cons — has a long way to play out just yet.

This may be as close as you can come to going on a spacewalk 240-ish miles above Earth Fri, 21 Sep 2018 05:31:35 +0000 The vertiginous video also offers an opportunity to consider theories posited by two of the giants of science

While on a spacewalk outside the International Space Station over Mexico, NASA astronaut Randy Bresnik captured this spectacular, vertiginous video with a GoPro camera.

I spotted it in a NASA Tweet yesterday, and when I watched it, I really did have the sensation that this would be as close as I’ll ever come to experiencing free-falling around the Earth. (Short of a virtual reality video, that is.)

Bresnik shot the video a while ago — on Oct. 20, 2017, during one of three spacewalks on his mission totaling more than 20 hours. So this isn’t exactly breaking news. But I figured that there would be others who hadn’t seen it until now. I also got to thinking that it offered an opportunity to talk about the phenomenon of free-falling around the Earth — in other words, orbiting.

Let’s start with a simple ‘what if’ scenario: Imagine Earth’s gravity suddenly disappearing while Bresnik was on his spacewalk. I think you can easily picture what would have happened: Both he and the space station itself would have shot off in a straight line out into space.

But of course, our planet’s gravitational field continued to pull on both, causing them to fall — right toward the Earth. But as they did, Earth’s curved surface fell away at the same rate. So instead of falling to the ground, they continued falling around the Earth.

And this, of course, is what it means to be in orbit.
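The "falling around the Earth" picture has a quantitative side: for a circular orbit, gravity must supply exactly the centripetal acceleration, which fixes the speed at v = √(GM/r). Here is a rough back-of-the-envelope sketch for the space station's roughly 400 km altitude (the constants and altitude are my own assumed values, not from the article):

```python
import math

GM_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6          # mean Earth radius, m
ALTITUDE = 400e3           # rough ISS altitude, m

r = R_EARTH + ALTITUDE
v = math.sqrt(GM_EARTH / r)   # circular orbital speed: gravity = centripetal
period = 2 * math.pi * r / v  # time for one trip around the Earth

print(f"orbital speed ≈ {v / 1000:.1f} km/s")
print(f"orbital period ≈ {period / 60:.0f} min")
```

That works out to about 7.7 km/s and one lap roughly every 92 minutes — which is why ISS crews see about 16 sunrises a day.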

I’ve always taken it for granted that an astronaut, a small tethered tool he or she may be working with, and the space station itself, would free-fall like this at the very same rate and therefore stick together. But if you stop to think about it for a moment, that’s not necessarily intuitive. You could imagine that the much more massive space station (on the ground it would weigh in at about 925,000 pounds) would fall much faster than a far more diminutive astronaut.

For millennia, people subscribed to the common sense notion that heavier objects do fall faster than lighter ones. In fact, Aristotle himself believed that objects fall at a speed proportional to their weight.

But then Galileo Galilei came along and upset the apple cart, observing that bigger pieces of fruit fell to the ground at the same rate as smaller ones.

Okay, I’m being silly. But in the 1500s, Galileo proposed what has come to be called the “equivalence principle”: The rate at which falling objects drop is independent of how much they weigh.

Galileo first arrived at this idea through a thought experiment that he outlined in his book “On Motion.” And then, as we all learned in elementary school (or should have, at any rate!), he allegedly tested it by dropping objects of different weights from the Leaning Tower of Pisa.

That story is apocryphal. Maybe it happened, maybe not. But Galileo really did test his theory by rolling objects of different weights down inclined planes. And sure enough, he observed that they all fell at the same rate.

In 1971, Apollo 15 astronaut Dave Scott famously gave the equivalence principle another test — on live TV during a walk on the Moon.

As you’ll see when you watch the video above, he held out a geologic hammer and a feather and dropped them at the same time. They were essentially in a vacuum, which meant there was no air resistance to affect the experiment.

Even though I knew the outcome before watching the video, it still gave me a thrill: The hammer and the feather hit the Moon dirt at precisely the same moment.
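The reason the hammer and feather tie is visible right in Newton's formulas: the falling object's mass cancels out of a = F/m, so the acceleration — and hence the drop time — depends only on the Moon, not on what is dropped. A minimal sketch (the lunar constants are standard values; the drop height is an arbitrary choice of mine):

```python
import math

GM_MOON = 4.9048695e12  # Moon's gravitational parameter, m^3/s^2
R_MOON = 1.7374e6       # mean lunar radius, m

# a = F/m = (G*M*m / r^2) / m = G*M / r^2 — the object's mass m cancels.
g_moon = GM_MOON / R_MOON**2  # ≈ 1.62 m/s^2, same for hammer and feather

def drop_time(height_m: float) -> float:
    """Time to fall height_m from rest, with no air resistance."""
    return math.sqrt(2 * height_m / g_moon)

# A hammer and a feather released from ~1.5 m hit at the same instant:
print(f"g_moon ≈ {g_moon:.2f} m/s^2, drop time ≈ {drop_time(1.5):.2f} s")
```

Nothing about the hammer or the feather appears in the formula — exactly what Scott's demonstration showed.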

Flash forward to 2016. A French satellite called MICROSCOPE was tasked with carrying out a far more precise experiment while orbiting Earth.

It involved concentric cylindrical shells a few centimeters long but of different masses. Since both objects were in orbit with the spacecraft, they were free falling around the Earth. If Galileo was right, then the concentric shells should fall at exactly the same rate under gravity.

And actually, another epochal theory was being tested too: Einstein’s general theory of relativity. It also dictated that the two objects should fall at the same rate despite their different masses.

Over the course of more than 1,500 orbits around Earth, extremely precise detectors checked to see whether there were any deviations in the rate at which the cylinders fell. The result? According to a story in the journal Science:

. . . no discrepancy in the acceleration of two small test masses to about one part in 100 trillion (10¹⁴). That’s more than 10 times better than the most sensitive ground-based experiments, which look for disparities in the response of weights to Earth’s spin.

It is the most precise confirmation yet of the equivalence principle.
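Comparisons like this are conventionally quoted as an Eötvös ratio: the fractional difference between the two measured accelerations. A toy illustration of what "one part in 100 trillion" means — the acceleration values below are made up for illustration, not MICROSCOPE data:

```python
# Hypothetical accelerations of the two test cylinders, differing
# fractionally by 1e-14 (the level MICROSCOPE could have detected):
a_inner = 9.81
a_outer = a_inner * (1 + 1e-14)

# Eötvös ratio: twice the difference over the sum.
eta = 2 * abs(a_outer - a_inner) / (a_outer + a_inner)
print(f"eta ≈ {eta:.1e}")  # on the order of 1e-14
```

MICROSCOPE saw no difference down to roughly this level, so any violation of the equivalence principle, if it exists, is smaller still.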

Energy to Burn Fri, 21 Sep 2018 05:30:23 +0000