Twitter Chief Executive Dick Costolo told reporters here, when asked how Facebook’s purchase of Instagram will affect competition in the mobile Internet scene:
“I think that sometimes there is a tendency for companies to react to events in the marketplace that are inconsistent with their strategy…and I think that tendency is a mistake.”
He noted how tech players rushed to buy video-sharing sites after Google Inc. bought YouTube for $1.65 billion in 2006, but that many of those bets ultimately fizzled. He said:
“You can look at all sorts of other similar cases in the past when an event like this happens and people try to react to it. Copying it is never a good idea, at least history would say it’s not a good idea.”
Mr. Costolo added:
“We will make sure that we execute on the strategy that we have and not one that’s been laid down for us based on events that happen in the marketplace.”
Part of the strategy the chief executive outlined on Monday is to focus on evolving its API, or application programming interface, so other companies can build products into Twitter, much as sellers push their wares on Amazon.com, he said. An API lets developers create programs that interact with Twitter. Twitter’s API has enabled thousands of applications created by third parties, many of which use Twitter to converse with other websites.
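To make the idea concrete, here is a minimal sketch of what a third-party client does with an API like Twitter's: request structured data (typically JSON) from the service and render it inside its own product. The payload shape and field names below are illustrative stand-ins, not the exact Twitter API schema.

```python
import json

# A hypothetical JSON payload in the shape a Twitter-style REST API
# might return to a third-party application. The field names here are
# illustrative, not the real Twitter API schema.
payload = '''
{
  "statuses": [
    {"user": {"screen_name": "alice"}, "text": "Hello from Tokyo!"},
    {"user": {"screen_name": "bob"},   "text": "Trying out a new client."}
  ]
}
'''

def latest_texts(raw_json):
    """Extract the tweet texts a client app would show in its timeline."""
    data = json.loads(raw_json)
    return [status["text"] for status in data["statuses"]]

for line in latest_texts(payload):
    print(line)
```

Because the interface is just structured data over HTTP, any third-party product — a desktop client, a website widget, an analytics dashboard — can build on the same endpoint without Twitter's involvement.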
Mr. Costolo is on a two-day trip to Japan to meet with local staff and Economy Minister Motohisa Furukawa as the office here looks to “aggressively” bulk up in the months ahead. The company did not disclose its current headcount, but said it has made some significant hires on the engineering and sales sides recently. The company is trying to keep up with a booming user base in Japan, where growth is outpacing the company’s rapid worldwide growth by “several percent.”
Japan had the third-largest slice of Twitter users in the world based on the number of accounts created before Jan. 1, according to Semiocast, a social-media research firm. Japan’s 29.9 million accounts trailed only the U.S. and Brazil, but its users are more active. Some 30% of Japanese accounts posted a message during the three-month period ended Nov. 30, compared with a global average of 27%, according to Semiocast. And Japanese remains the second-most-used language on Twitter after English.
Mr. Costolo, who says this is his third trip to Japan since becoming chief executive in 2010, pointed out that Japanese users have smashed four Twitter records. Japan most recently set a new world record during the annual broadcast of Hayao Miyazaki’s 1986 animated classic “Castle in the Sky” in December, when about 25,000 tweets were blasted out per second. Japanese users first entered Twitter’s history books back in the summer of 2010, when soccer fans tapped out 3,283 “mumbles” -- the term used in Japan for tweets -- per second as the national men’s soccer team defeated Denmark at the World Cup in South Africa.
Mr. Costolo plans to discuss how features on the microblogging site can be enhanced for disaster situations, based on lessons learned from the March 11 earthquake and tsunami last year. Twitter’s Japan office, which will be the command center for this project, has already started to work with the U.K. government on utilizing Twitter during national emergencies.
COMMENTARY: I agree 100% with Dick Costolo's decision not to emulate Facebook by acquiring a photo-sharing site of its own. Part of the problem today is that CEOs overreact, get caught up in all the hype of what competitors are doing, and end up emulating their competitors needlessly without thinking the decision through. CEOs should make methodical, purposeful, and well-thought-out moves consistent with their overall business strategy.
CEOs should stay focused and true to their original goals and initiatives. As I said in my previous blog post about Facebook's acquisition of Instagram, the deal was more about acquiring valuable data about mobile users than about the technology itself. Don't get me wrong: Instagram has superior photo-sharing technology at this point, but it is not leagues ahead. In time, competitors will emulate that technology. It could even be one of Twitter's developers.
Marilyn Monroe in her famous dress skirt scene from the film "The Seven Year Itch" (Click Image To Enlarge)
L.A.'s Otoy promises a processing and image capture breakthrough that will allow actors to play their current age indefinitely. But the tech also opens up a range of possibilities for other image-intensive applications. If Otoy’s founders are right, Star Trek’s holodeck isn’t far behind.
Legendary talent agent Ari Emanuel is encouraging his WME clients to digitally scan their faces with a technology that allows them to act in roles at their current age for the rest of their lives. Of his clients, he says:
"They never get old."
While the technology to digitally archive a celebrity’s face and overlay it on a younger actor has been around for years, the expensive storage and computing power necessary to render the mammoth data files limited the technology to the fleeting needs of big-budget blockbusters. Now, Otoy, a hidden gem of a startup tucked away in Los Angeles, has solved the processing and storage problem with a breakthrough in processing power, making it economically viable to archive the appearances of actors en masse in their own private bank of youth.
How it Works
Capturing a realistic representation of a face isn’t as simple as snapping a picture in good light, says Otoy’s Academy Award-winning technologist, Tim Hawkins. Skin is a unique material -- a little bit like a cloud, a mesh of tissue and blood vessels reflecting light in a way that gives facial complexion a textured luminosity over patches of bumpy skin and subtle shadows. Indeed, it’s the lack of this detail that gives CGI-created faces a suspicious sense of unrealistic perfection, tipping them into the dreaded "uncanny valley."
Otoy’s solution is to bask a human face in 360 degrees of bright light, which allows a computer to recreate the effects of light at any angle and any intensity, from an early-morning sunrise to a full moon. Actors step into a large hollow sphere surrounded by dozens of high-wattage bulbs. Six high-resolution professional cameras stationed in four corners at eye level snap photos as a series of light patterns is projected onto the actor’s face. The surreal, eye-tearing experience takes only about five minutes to capture a blank-stare expression (see a video of me unsuccessfully trying to keep my eyes still during the process below).
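The principle that makes this capture useful is that light transport is linear: once you have photographs of the face under each basis lighting direction, any novel lighting condition can be synthesized as a weighted sum of those photos. A toy sketch of that idea, with tiny random arrays standing in for real photographs and illustrative weights:

```python
import numpy as np

# Toy sketch of image-based relighting, the principle behind systems
# like LightStage: photograph the subject under N basis lighting
# directions, then synthesize novel lighting as a weighted sum of
# those photos. Dimensions and weights here are illustrative only.
rng = np.random.default_rng(0)
H, W, N = 4, 4, 8                      # tiny 4x4 "face", 8 basis lights
basis_images = rng.random((N, H, W))   # one grayscale photo per light

def relight(basis, weights):
    """Novel lighting = sum_i w_i * photo_i (light transport is linear)."""
    weights = np.asarray(weights, dtype=float)
    return np.tensordot(weights, basis, axes=1)

# "Early-morning sun": mostly one low-angle light, plus a little fill.
sunrise = relight(basis_images, [0.8, 0.2] + [0.0] * (N - 2))
print(sunrise.shape)
```

The linearity is the whole trick: because any target lighting is just a weighted combination of the captured basis photos, the five-minute scan is enough to recreate the face under lighting conditions that were never physically photographed.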
The magic of that capture technology, LightStage, is how a single actor, Armie Hammer, played both Winklevoss twins simultaneously in the Facebook biopic The Social Network (see before-and-after photos of Hammer’s LightStage-captured face overlaid on his body double below).
Click Image To Enlarge
Should an actor want to express more than just a blank stare, the LightStage can capture facial expressions of all contortions. Running through the full catalog of human expressions, the Facial Action Coding System, users act out every possible dramatic and silly expression, as LightStage captures facial muscles stretched in enough ways that a computer can "puppeteer" any emotion in the future.
A Brilliant New Technology
The prodigy behind the technology is Otoy CEO Jules Urbach, a self-taught computer programmer who designed the software that super-charges a cheap graphics card with the rendering power of a supercomputer. Instead of processing tasks one at a time, Otoy’s software opens up the computing pipeline like a multi-lane highway, permitting multiple tasks to run simultaneously (what programmers refer to as "parallel processing"). Without Otoy’s tech, a supercomputer "typically spends dozens of hours per machine to render just a single [frame] on films like Transformers," explains Otoy President Alissa Grainger, who first caught up with Fast Company at Singularity University’s executive training conference in Los Angeles. At Otoy’s ever-expanding headquarters in downtown L.A., I witnessed Transformers-quality rendering in real time on an iPad, streaming from the company's cloud servers over a Wi-Fi connection.
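As a rough sketch of the "multi-lane highway" idea: frames of a film are independent of one another, so they can be handed to a pool of workers instead of rendered one at a time. Real GPU parallelism spreads work across thousands of cores; this thread-pool analogy only illustrates the scheduling concept, and `render_frame` is a hypothetical stand-in for an expensive renderer.

```python
from concurrent.futures import ThreadPoolExecutor

def render_frame(i):
    """Hypothetical stand-in for an expensive per-frame render."""
    return f"frame-{i:03d}"

def render_sequential(n):
    # One lane: each frame waits for the previous one to finish.
    return [render_frame(i) for i in range(n)]

def render_parallel(n, workers=4):
    # Many lanes: independent frames are processed concurrently,
    # and map() preserves the original frame order in the result.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(render_frame, range(n)))

# Same frames come out either way; only the schedule differs.
assert render_parallel(8) == render_sequential(8)
```

The payoff is that when each frame genuinely takes hours, running many of them at once is the difference between render farms and real-time playback.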
Even with this impressive improvement in processing power, tech-savvy readers will rightly point out that even a blazing-fast Internet connection couldn’t possibly download the huge data file of a cinema-quality image in real time. True. So Urbach also designed a new data compression algorithm that scrunches the data "several hundred" times smaller than, for instance, what Sony Imageworks used to store CGI from the Spider-Man movies, according to Grainger.
Otoy previously made headlines when it proved what was thought to be impossible, streaming an Xbox game seamlessly between different types of devices.
The implications of this technology are far-reaching. For instance, Urbach estimates that his compressed streaming algorithm could cut Netflix's bandwidth needs roughly in half. Given that Netflix accounts for up to 32% of all U.S. bandwidth, that alone would free up roughly 16% of the country's Internet capacity if Otoy were to partner with the biggest names in video streaming.
A Business of Possibilities
With the processing and storage problem solved, Otoy’s hole-in-the-wall LightStage studio in Burbank has already become a conveyor belt of A-list celebrities and athletes seeking its digital fountain of youth. Though Urbach is insistent that facial scans be the intellectual property of each individual person, clients still need Otoy’s patented technology to store and stream their digital doubles in manageable chunks. As a result, Otoy has, overnight, become the only business in town for this kind of service, and has attracted some of Silicon Valley’s top investors for a trip down South.
But, for the Otoy team, the real magic of digital doubles is yet to be realized. Legendary actors such as Tom Hanks would be able to play younger parts years after receiving their Medicare cards. Young actors could license out their likenesses to magazines, rather than have to churn out photo shoots and profiles during the grueling promotion of an upcoming film. Even deceased actors could be digitally resurrected.
Otoy’s biggest business potential may not be in serving the Hollywood elite, but in democratizing access to supercomputing power for the growing industry of web, low-budget, and amateur filmmaking. The company recently acquired a popular rendering software, Octane, and revealed to Fast Company that it is offering up its LightStage data and real-time rendering power as a cloud service, complete with plug-ins for the widely used production software of Autodesk, including Autodesk Maya.
Ultimately, the dream for Otoy’s founder is a Star Trek-like holodeck, where a 3-D virtual environment looks as realistic as the analog world. Such a breakthrough would require more than just LightStage. Otoy is tackling this dream one chunk at a time, and we’ll have more details soon as it releases technology that could disrupt the entertainment, app, PC, and video game industries. Stay tuned.
COMMENTARY: Otoy's LightStage digital capture and real-time rendering is quite an incredible technological achievement. The above videos didn't really demonstrate what the final product looks like. Here's one from SIGGRAPH 2009 showing an individual performer being scanned, his digitized image "duplicated" several times over, and the result rendered onto a real-world scene. The likeness to the original individual is quite astounding.
To be able to extend the youthfulness of an actor and use his likeness in films twenty years from now would be quite incredible. Imagine if Otoy's LightStage technology had been available 50 years ago: actors like Marilyn Monroe, Humphrey Bogart, James Cagney, Robert Redford, James Stewart, Kim Novak, Paul Newman, Gregory Peck, John Wayne, Steve McQueen, Gene Kelly, Elizabeth Taylor, and many other favorite actors of mine from that era could appear in today's films looking just like they did in their prime. I don't know how their voices would be duplicated, but it would be a welcome change from some of the actors who appear in some of today's films.
INSPIRED BY "CINEMAGRAPHS," FLIXEL LETS YOU SNAP PHOTOS AND TRANSFORM THEM INTO HIP ANIMATIONS WITH JUST A FEW SWIPES.
If you’re as big a fan of Kevin Burg and Jamie Beck’s animated-GIF "Cinemagraphs" as we are, you’ve probably wondered: How could I make some of those myself? To use Apple’s trademarked-but-annoyingly-useful phrase, there’s an app for that. It’s called Flixel, and it transforms your iPhone’s camera into a Cinemagraph-making marvel.
Flixel co-founder Mark Homza told Co.Design:
"We were so enthralled by Cinemagraphs but burdened by the complexity and time required to create them. With Flixel, we wanted to propose a creative experience that blended simplicity, artistic integrity, and pushed the boundaries of iPhone imagery."
Indeed, part of what made Burg and Beck’s Cinemagraphs so bewitching was their subtlety -- and the technical skill that no doubt went into achieving it. How can you automate and package that process into an app that any schmoe can use?
Flixel animated image of a train moving past a passenger platform (Click Image To Enlarge)
Amazingly, Flixel pulls this feat off. Simply snap a photo just like you would normally, and the magic elves inside the app capture a handful of video frames, process them, and even image-stabilize everything for you. But the real genius of Flixel’s interaction design reveals itself when it’s time to animate the GIF. Rubbing your fingertip over the image animates just that portion of the frame, so you can create subtle effects like a candle flame flickering or a cat twitching its tail. (Note: Flixel didn’t invent this clever interaction, but does refine it in comparison to similar apps like Kinotopic and Cinemagram.) If you want to get fancy, you can choose starting and ending frames for your animation, decide to repeat it or loop it back and forth (the latter avoids distracting "jump cuts"--a nice touch), and even apply Instagram-esque filters (some of which cost money--well played, guys).
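The core compositing trick behind a cinemagraph can be sketched in a few lines: keep one frozen frame everywhere except inside the user-painted mask, where the original video frames play. The tiny arrays below are stand-ins for real frames; this illustrates the general technique, not Flixel's actual implementation.

```python
import numpy as np

# Tiny grayscale stand-ins for real frames: a frozen still photo,
# plus three "video" frames where pixel values change over time.
still = np.zeros((4, 4))
video = [np.full((4, 4), float(t)) for t in range(3)]

# The region the fingertip "rubbed": only this area will animate.
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True

def cinemagraph(still, video, mask):
    """Composite each frame: animated inside the mask, frozen outside."""
    return [np.where(mask, frame, still) for frame in video]

frames = cinemagraph(still, video, mask)
```

Everything outside the mask is pixel-identical across frames, which is exactly what gives cinemagraphs their eerie "photo that moves in one place" quality; the stabilization step matters because any camera shake outside the mask would break the illusion.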
The app bungles the "first impression" user experience a bit by displaying a social-network-like feed of other people’s Flixels when you launch the app. (I’d have preferred to see the camera function as the default launch screen--I don’t want to miss capturing any Cinemagraph-worthy moments.) But other than that minor quibble, using Flixel is a delight. The results aren’t as pristine-looking as Burg and Beck’s Cinemagraphs, but the ease of making them with Flixel far outweighs any other concerns. It’s the first photo-enhancing app I’ve seen since Instagram to really add unique value to the cameraphone experience. Your first Flixel might be crude, but you’ll have so much fun doing it that you’ll immediately want to make another one. And another, and another. Cat photos, baby pics, and party shots may never be the same.
Flixel of someone holding a camera and taking pictures while another individual uses his smartphone (Click Image To Enlarge)
So is Flixel just a crass "product-ization" of Beck and Burg’s innovative art form? To their credit, Mark Homza and CEO Phillipe LeBlanc acknowledge right on Flixel’s homepage that the app was "inspired by Cinemagraphs." Homza says:
"The app, in a way, is an homage to [Beck and Burg’s] work. The goal is to propagate the art form and make it accessible to a mainstream audience. It would be an honour to work with them and get their feedback."
Despite these good intentions, some will inevitably say Flixel is a ripoff. Others--like me--will say even if it is, who cares? This app is awesome.
Update: Some of our commenters have mentioned that if Flixel is "a ripoff" of anything, it’s earlier apps like Cinemagram and Kinotopic. I checked both of them out and while the interface conventions of these apps are very similar, Flixel’s feels uniquely well-designed. Kinotopic forces you to jump through account-setup hoops before you can even experiment with the camera--a big UX fail. And Cinemagram’s interface, while responsive, is rough and one-dimensional compared to Flixel. Cinemagram is presented as a video app rather than a photo app, so it lets you record long clips--and forces you to throw most of that material away before creating an animated GIF. (Then why let me record that much in the first place?) Flixel’s snapshot-like UX makes more sense: you capture a photo--one moment--and paint video-like effects onto it in a nonlinear, opt-in interface (versus Cinemagram’s card-like interface, which pushes you through every step, including optional afterthoughts like color filtration, whether you want to or not).
Animated image of snow falling on the ground using Cinemagram's app (Click Image To Enlarge)
Animated image of a cat scratching his ear using Kinotopic's app (Click Image To Enlarge)
COMMENTARY: That's what I call a cool iPhone app. Wish I could try out this new on-the-fly GIF-making app. Too bad I'm a loyal BlackBerry phone user -- have been for years.
Courtesy of an article dated March 26, 2012 appearing in Fast Company Design
In the 1960s, this was the procedure for taking satellite photos of Earth: 1. Launch satellite. 2. Satellite automatically takes photos on film. 3. Satellite ejects completed roll of film which falls into the Pacific. 4. Air Force attempts to catch canister mid-fall. Failing that, Navy recovers it from the ocean.
I tell you this story to bring into focus the quotidian miracle that is Google Maps’ satellite view. Paul Rademacher says:
"We don’t usually stop and marvel at it. We only run across it while accomplishing some other task. When we’re looking up driving directions or some place we just heard about on the news, the imagery is secondary."
That is why he made Stratocam.
The site is built on top of the Google Maps API, but to emphasize the images, the interface has been stripped down. You can advance back and forth through a slide show of images others have found, you can up or down vote what you see and you can navigate anywhere on the planet to take a snapshot of your own. Rademacher says the biggest challenge was finding a good balance of simplicity and features.
"The site is a sit-back slideshow, but also a voting game, and a still-photography app."
Click Images To Enlarge
The best shots are surprisingly zoomed in--views you’d likely never find in your own random browsing. Superficially, the project invites comparison to Yann Arthus-Bertrand’s Earth from Above. Both projects share a perspective. Both tend to be attracted to the same types of subjects--there are a lot of striking patterns and industrial and exotic landscapes.
For my money, Stratocam has more in common with Jon Rafman’s The Nine Eyes of Google Street View. Like Nine Eyes, Stratocam divorces the moment of framing from the moment of capture in photography. Automated processes capture the images, but it’s not until people come along and decide which to emphasize and which parts to ignore that we begin to see an artist’s eye. Rademacher says:
"The Google Maps satellite image is a single photograph that stretches over the entire globe. Thousands of people could pore over it and still not discover every highlight."
So far, people have contributed over 10,000 shots.
Click Images To Enlarge
Stratocam is part of a larger obsession with maps for Rademacher. He says:
"I love maps because they’re the connection between an abstract concept and the real world. A city becomes real once you see how its streets are laid out."
COMMENTARY: Now that's what I call beautiful maps of the real world. I hope we will finally be able to see images of Area 51 and Dulce, New Mexico. Maybe we'll be able to pick up images of UFOs or E.T. himself.
Courtesy of an article dated March 26, 2012 appearing in Fast Company Design
Lytro's amazing "light field" camera tech has everyone from pro photogs to casual clickers abuzz. But the innovations now en route suggest that the company's best is yet to come.
The video above explains the new kind of photography that Lytro is bringing to the world. Click anywhere in the photos it produces and you'll instantly refocus the image at that point. And you can zoom in, too.
The beauty part? It's no million-dollar, Matrix-like special effect--just an image snapped in a single second, like any other digital photo. The technology behind it--plenoptics or light field imaging--got the tech press all excited about the "focus-free" powers of the coming Lytro camera. Then, when Lytro revealed the product, buzz picked up anew. The thing is, the real excitement about this technology is yet to come. And when it does, that's when everything about imaging may truly change.
Most of us grabbed onto the tech's easiest aspect when we first heard about Lytro--the fact that, unlike with current camera technology, making a photograph doesn't require you to focus on something at the moment of snapping. This has meaningful implications, from new ways to consume photos to a faster time between turning a camera on and snapping a photo. But this misses a huge amount of the technology and science in the invention.
The design
Some of the remarkable science is revealed in the first Lytro cameras, yet to go on sale but shown last month. The unusual minimalist form of the units, unlike any other camera you've ever seen, highlights that this is a new type of photography. The format of the zoom lenses and the plenoptic array--the secret to capturing all the light "rays" from a scene, rather than light falling on pixels as in a typical digital camera--more or less shaped the design. And the final third of the unit, which contains the complex electronics that process the data (a job that in development required hundreds of cameras hooked up to a supercomputer), ends with a touchscreen that instantly lets you try out the effect.
Speaking with Fast Company, Lytro CEO Ren Ng noted this was a bold decision, partly shaped by a desire to be "open to whatever the [design] form might take, as long as it serves the end photographic goal...and to focus more on the end goal of the photographer." Because Lytro is speaking a new language of photography, the company could make this design move in a way that perhaps other firms can't, locked in as they are to historic design choices and an extensive user base already familiar and comfortable with them.
The phenomenon
Now everything is poised to change. It starts with that unusual camera, with its design and unique products--what's been dubbed the "living picture." You can pre-order them now, and though cagey about actual figures, Ng noted that sales interest has "been just enormous and terrific and in the aggregate wisdom of our team, from a forecasting perspective, has been pretty close to what we were expecting."
Ng says interest has spanned the range from professional photographers to casual clickers to uncommon consumers like forensic photographers and those interested in "scientific imaging." This means thousands of light field cameras will hit the scene quickly, and then very soon you'll start seeing the images crop up in websites and apps and online newspapers. The phenomenon will likely go viral because of its unique powers and futuristic feel.
Right away that's going to change photography, in the same transformative way digital photography has all but killed film photography. Other manufacturers will follow in Lytro's footsteps because, as Ng notes, "across very large parts of the industry, folks in positions of expertise and leadership realize that light fields are the future of imaging." Then it's just a question of "large companies with a large, stable market" to resolve the "classic innovator's dilemma" and dramatically pivot--which will take a longer time, but is probably inevitable.
Lytro set out to deliver the whole package, from camera to software to the entire ecosystem for experiencing the photos, because Ng's team "believes the technology requires a transformational product, not an incremental product, to bring the benefits to the end consumer." That is, Lytro is trying to be the iPod of camera tech.
Future products
We know the megapixel war in conventional digital photography has changed digital cameras yearly, and has moved onto a new front: smartphones. As these cameras get better, and get carried everywhere "they really call into question the relevance of point and shoot cameras," Ng agrees, and "the dedicated camera market is backed into a corner" by the advances in smartphone imaging--even if the state of the art of these new tiny systems is compromised, optically, by their small size. Needless to say, just as Lytro is bringing fresh innovation to the handheld game and has an expansive future, smartphone implementations are "definitely on the roadmap" because as Lytro's research has shown "people are sharing pictures to social networks from mobile phones" as much as they're "passionate about picture taking" with normal units.
Think about the benefits of a Lytro camera in a smartphone (no need for focus, almost instant-on and instant-snap, better low-light performance) and imagine millions of users snapping light field photos and sharing them on Twitter and Facebook, and you can see how transformative it'll be. But the technologies are equally applicable in "smartphones and high-end cameras."
The software future
All of this is still talking about still imagery, and with just a few "special effects." But as Ng points out, "it turns out that in every shot these pictures inherently capture 3-D data" and there are other developments like "parallax" (which means you can move your viewpoint across the photo a little so, in effect, you may see behind obstructions in the foreground) and "new editing facilities that are unique to light field images" that are coming in Lytro's software roadmap and will surpass the powers you have in systems like Photoshop. And, thanks to the physics of the invention, all of these future processing options will "apply to every picture taken with Lytro cameras from day one."
Finally, let's talk video. Exactly the same imaging benefits also apply to videos with these cameras. Ng agrees.
"There's a lot of hard engineering to get video to work, because you have to capture a light field image at every frame--a lot a data. To take advantage of it involves a lot of processing and smarts."
But the "implications for video are terrific," Ng added, "for regular folks trying to shoot video or even for reality TV folks, for that matter, [where] focussing is basically impossible because you don't know who's going to do something interesting next because you don't have a script." Lytro video would instantly solve that problem, and as well as enabling a whole panoply of artistic and special effects that could make bullet-time look like an amateurish effort, the ability to shoot a movie digitally without worrying about focus could save huge amounts of time and money too.
Flash, bang
So this is a new technology that has a grounding in a very well established one but takes it into a wholly new direction. It has a direct development roadmap, as well as the plans to expand into special-purpose products alongside the main product. It may not happen in a snap, but it looks like Lytro will eventually change photos forever.
COMMENTARY: Lytro is a Silicon Valley startup that's building on research carried out by CEO Ren Ng at Stanford, and its promise is simple: With its light field camera hardware and software, it could change photography in an almost unimaginable number of ways--starting with the thing that most news sites have picked up on this morning, the lack of a need to focus a photo.
Meanwhile, Lytro's $50 million in startup capital has come from big names like Andreessen Horowitz and Greylock, and its technological team includes a cofounder of Silicon Graphics and the man who was the chief architect for Palm's revolutionary webOS software. So what's the fuss all about?
It's called light field, or plenoptic, photography, and the core thinking behind Lytro is contained neatly in one paper from the original Stanford research--though the basic principle is simple. Normal cameras work in roughly the same way your eye does, with a lens at the front that gathers rays of light from the view in front of it and focuses them through an aperture onto a sensor (the silicon in your DSLR or the retina in your eye). To focus your eye or a traditional camera, you adjust the lens to capture light rays from different parts of the scene and throw them onto the sensor. Easy. This does have a number of side effects, including the need to focus on one thing at a time. This adds complexity and, if used well, beauty to a photo.
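The after-the-fact refocusing itself can be sketched with the classic "shift-and-add" idea from the light field literature: the camera effectively records many sub-aperture views of the scene, and focusing at a chosen depth later amounts to shifting each view in proportion to its position in the aperture and averaging. The 1-D toy below illustrates the principle only; it is not Lytro's actual pipeline.

```python
import numpy as np

# A single bright point in a tiny 1-D "scene".
scene = np.array([0, 0, 9, 0, 0], dtype=float)

# Sub-aperture views: a point at disparity d appears shifted by u*d
# in the view taken at aperture offset u. Here the point sits at d=1.
views = {u: np.roll(scene, u * 1) for u in (-1, 0, 1)}

def refocus(views, d):
    """Shift each sub-aperture view back by u*d, then average them."""
    shifted = [np.roll(img, -u * d) for u, img in views.items()]
    return np.mean(shifted, axis=0)

in_focus = refocus(views, 1)  # correct depth: the point adds up sharply
blurred = refocus(views, 0)   # wrong depth: energy smears across pixels
print(in_focus.max(), blurred.max())
```

At the correct depth the shifted copies line up and the point reconstructs at full brightness; at any other depth the same energy spreads over neighboring pixels, which is exactly the out-of-focus blur you see when you click the "wrong" part of a living picture.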
The science behind the Lytro camera is explained in the following video:
The secret to how the Lytro light field camera works is locked inside that elegant, simple, and minimalist box:
Lytro light field cameras come in three colors and can be pre-ordered online HERE:
Red Hot: 750 pictures - 16 GB - $499
Graphite: 350 pictures - 8 GB - $399
Electric Blue: 350 pictures - 8 GB - $399
The Lytro light field camera is a clever piece of technology that challenges regular digital photography as we know it, but I'm not quite ready to call it revolutionary or disruptive. It's not the most beautiful-looking camera, but it is a fun, user-friendly camera that is going to be popular with amateur photographers because it eliminates focusing. Just point and shoot. Simple. You upload the pics to your PC and do the focusing later. Why didn't anybody else think of that?
Commercial photographers need all of this and then some. The website does not explain the company's future plans for meeting the serious photographic needs of the professional photographer: time lapse, freeze frame, video, flash, B&W, and distance photography.
Courtesy of an article dated October 31, 2011 appearing in Fast Company and an article dated June 22, 2011 appearing in Fast Company
When I saw this article I knew I had to post the pics for you. You can view the rest of the photos HERE. Marilyn Monroe was so hot looking, and there will never be another like her.
Justin Jensen's Kickstarter-funded CineSkates approximates De Palma–style dolly maneuvers.
What's the difference between a compelling, immersive short film and a throwaway piece of sub-viral crap? Not video resolution. Nowadays, when every cat-filming schmoe has an HD video camera in his smartphone and the DSLR revolution has unleashed a tsunami of bokeh onto Vimeo.com, you need more than sharp pictures and good lenses to make an impact. According to Justin Jensen, an engineer-cum-amateur photographer/filmmaker who studied Computational Photography at the MIT Media Lab:
"I realized that video quality is no longer limited by resolution, but instead by stabilization."
In other words, you need smooth camera movement to give your vids that juicy sheen of production value. But since Jensen couldn't afford a SteadiCam, he invented CineSkates: a tiny, inexpensive, portable, expandable camera-movement platform based on Joby's Gorillapod.
Jensen sought funding for his invention on Kickstarter, and apparently he struck a nerve with other indie filmmakers, because he exceeded his $20,000 ask in just one day. Since then he's collected nearly $300,000--all for a homely looking little tripod on skateboard wheels. But CineSkates' lo-fi look belies its ingenious design: As the video above shows, the Gorillapod's flexible legs let a cinematographer bend the CineSkates into configurations that can execute sophisticated-looking camera moves simply by nudging the rig with a finger and letting inertia and gravity take over. (And besides, even Hollywood pros use dollies outfitted with skateboard wheels.)
Click Image To Enlarge
Click Image To Enlarge
The CineSkates platform is small enough to fit into a backpack, and Jensen has designed an expandable interface called Cinetics Connect into the legs, which will let filmmakers attach awesome-sounding (but as yet unreleased) accessories like robot-controlled wheels to their camera rigs. Jensen tells Co.Design:
"We're working on new clamps that will enable CineSkates to work with other tripods. We're working to have a complete system for filmmakers that will all fit in one bag."
As a filmmaker myself who has schlepped my share of annoyingly heavy equipment cases on location, I can't wait.
COMMENTARY: The CineSkates articulating stabilizing tripod for video cameras is going to be a hit with both amateur and professional filmmakers. I took the following from the CineSkates pitch on Kickstarter.
What Are CineSkates?
CineSkates™ are a set of three wheels that quickly attach to a tripod and enable fluid, rolling video in an ultra-portable package. Watch the video to see what they are capable of... CineSkates were used to film it!
CineSkates work specifically with the GorillaPod Focus tripod. A ballhead is also required... most small, strong ballheads will work great. Fortunately, JOBY, the maker of GorillaPod Focus and BallHead X has agreed to include their products in the CineSkates System at a generous price!
What Makes CineSkates So Cool?
CineSkates can produce shots that have previously been impossible or only possible with bulky and expensive equipment. Here are a few:
Arcing shots that rotate around objects
Sliding shots that push or pull the subject into focus
Rolling shots that glide over the subject
Time-lapse shots that move the camera slowly and smoothly
Panning shots that scan a wide area
"Worm's eye view" shots that slide just above the floor
Filmmakers will find CineSkates perfect for weddings, music videos, product demos, real estate ads, web videos, corporate films, and even for narrative and documentary films.
CineSkates Are Patent Pending
CineSkates have been patent pending since shortly after being invented in early 2011. The patent also covers other systems and methods for adapting camera mounting devices. Cinetics reserves the rights to the Cinetics and CineSkates names and the Cinetics logo.
When will rewards ship?
Shipping information will be collected after the Kickstarter project ends on October 14th, and rewards will ship the following week. Want them earlier? Pledge at the Limited level and get one of the 100 CineSkates Systems that will be sent out in mid-September.
I can see why he raised $300,000 through Kickstarter. Another winner!!
Courtesy of an article dated October 3, 2011 appearing in Fast Company Design
As Internet giants Facebook Inc. and Google Inc. race to expand their facial-recognition abilities, new research shows how powerful, and potentially detrimental to privacy, these tools have become.
Armed with nothing but a snapshot, researchers at Carnegie Mellon University in Pittsburgh successfully identified about one-third of the people they tested, using a powerful facial-recognition technology recently acquired by Google.
Prof. Alessandro Acquisti, the study's author, also found that about 27% of the time, using data gleaned from Facebook profiles of the subjects he identified, he could correctly predict the first five digits of their Social Security numbers.
The research demonstrates the potentially intrusive power of a facial-recognition technology, when combined with publicly available personal data. The study was funded largely by a grant from the National Science Foundation, with smaller sums from Carnegie Mellon and the U.S. Army.
Paul Ohm, a law professor at University of Colorado Law School, who has read Prof. Acquisti's paper, said it shows how easy it is becoming to "re-identify" people from bits of supposedly anonymous information. He said.
"This paper really establishes that re-identification is much easier than experts think it's going to be."
For his study, Prof. Acquisti used a webcam to take pictures of student volunteers, then used off-the-shelf facial-recognition software to match the students' faces with those in publicly available Facebook photos. He said.
"We call it the democratization of surveillance."
The professor said the study also shows how Facebook, with its 750 million users, whose names and profile photos are automatically public, is becoming a de facto identity-verification service.
A Facebook spokesman said that Facebook profiles don't always contain pictures of people's faces. He said.
"Users can choose whether to upload a profile picture, what that picture is of, when to delete that picture."
Google Chairman Eric Schmidt discussed his concerns about Facebook at the D: All Things Digital conference in June.
Facebook is "the first generally available way of disambiguating identity," he said. "Historically, on the Internet such a fundamental service wouldn't be owned by a single company. …I think the industry would benefit from an alternative to that."
Google has been racing to create a rival social-networking service. In June, it launched Google+ to compete with Facebook. In July, Google acquired Pittsburgh Pattern Recognition, or PittPatt, the facial-recognition technology that was used in the Carnegie Mellon study.
Facebook rolled out its facial-recognition service world-wide in June. The service lets people automatically identify photos of their friends. Facebook users who don't want to be automatically identified in photos must change their privacy settings.
A Google spokesman said the company won't introduce facial-recognition technology "to our apps or product features" without putting strong privacy protections in place. At the D conference, Mr. Schmidt said Google had withdrawn a facial-recognition service for mobile phones that it considered too intrusive.
The race to acquire facial-recognition technology reflects the technology's sharp improvement in recent years. The number of matching photos that were incorrectly rejected by state-of-the-art recognition technology declined to 0.29% in 2010 from 79% in 1993, according to a study by the National Institute of Standards and Technology.
Peter N. Belhumeur, professor of computer science at Columbia University, said.
"It's certainly not science fiction anymore."
One big reason for the leap forward: the wide availability of photos that people have uploaded to the Internet through social-networking sites. Previously, publicly available pictures of individuals were mostly limited to driver's-license photos, school portraits or criminal mug shots, all of which were difficult to obtain.
In the Carnegie Mellon study, 93 students agreed to be photographed using a web camera attached to a laptop. The shots were immediately uploaded to a cloud computer and compared with a database of 261,262 publicly available photos downloaded from Carnegie Mellon students' Facebook profiles.
In less than three seconds, the system found 10 possible matching photos in the Facebook database. The students confirmed their face was among the top results more than 30% of the time.
Prof. Acquisti said.
"The research suggests that the identity of about one-third of subjects walking by the campus building may be inferred in a few seconds combining social-network data, cloud computing and an inexpensive webcam."
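The pipeline the study describes--embed each face as a vector, then rank the database photos by similarity to the webcam shot--can be sketched roughly as follows. This is a minimal illustration, not the researchers' actual system: the face-embedding model itself is omitted, and the 128-dimensional vectors below are random stand-ins for real face embeddings.

```python
import numpy as np

def top_matches(query_emb, db_embs, k=10):
    """Rank database face embeddings by cosine similarity to the query."""
    q = query_emb / np.linalg.norm(query_emb)
    db = db_embs / np.linalg.norm(db_embs, axis=1, keepdims=True)
    sims = db @ q                      # cosine similarity, shape (N,)
    idx = np.argsort(sims)[::-1][:k]   # indices of the k best matches
    return idx, sims[idx]

# Toy demo: 1,000 random "embeddings" plus a noisy copy of entry 42
# standing in for a fresh webcam shot of the same person.
rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 128))
query = db[42] + 0.05 * rng.normal(size=128)
idx, scores = top_matches(query, db)
```

The ranking step is just a matrix-vector product, which is why scanning a quarter-million photos in under three seconds is plausible once the embeddings are precomputed in the cloud.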
He then tried to discover whether he could predict sensitive information from the Facebook profile of individuals he had identified. He exploited the fact that, after 1987, the Social Security Administration started assigning Social Security numbers in a way that inadvertently made it easier to predict them based on the person's birthdate.
Drawing on knowledge of the Social Security numbering system gained in a previous experiment, Prof. Acquisti was able to predict the first five digits of subjects' nine-digit Social Security numbers 27% of the time, with just four attempts.
"The chain of inferences comes from one single piece of anonymous information—somebody's face."
The last four digits of the number also are predictable: In a 2009 paper, Prof. Acquisti showed that he could predict an entire Social Security number with fewer than 1,000 attempts for close to 10% of people born after 1988.
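Why does knowing a birthdate shrink the guessing problem so much? Under the pre-2011 scheme, the first three digits (the "area number") were tied to the state where the number was issued, and the middle two (the "group number") followed a slow-moving issuance sequence, so a birth state and date narrow the first five digits to a handful of candidates. The sketch below illustrates that enumeration idea only; the lookup tables are made-up stand-ins, not the real SSA assignment data.

```python
# Hypothetical stand-in tables -- NOT the real SSA assignment data.
AREA_BY_STATE = {"PA": ["159", "160", "161"], "OH": ["268", "269"]}
GROUP_SEQUENCE = ["01", "03", "05", "07"]  # illustrative issuance order

def candidate_prefixes(state, groups_active=2):
    """Enumerate plausible first-five-digit prefixes for someone who
    applied in `state` while the first `groups_active` group numbers
    in the issuance sequence were in use."""
    return [area + group
            for area in AREA_BY_STATE[state]
            for group in GROUP_SEQUENCE[:groups_active]]

guesses = candidate_prefixes("PA")
# 3 areas x 2 active groups leaves only 6 candidate five-digit prefixes
```

With the candidate space collapsed from 100,000 possibilities to a few, a handful of guesses covers a meaningful fraction of targets--which is the effect Prof. Acquisti exploited.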
In June, the Social Security agency launched a new "randomized" numbering system, which will make such predictions more difficult for future generations. An agency spokesman said that even under the old system "there is no foolproof method for predicting a person's Social Security number."
As a demonstration of his latest project, Prof. Acquisti also built a mobile-phone app that takes pictures of people and overlays on the picture a prediction of the subject's name and Social Security number. He said he won't release the app, but that he wanted to showcase the power of the data that can be generated from a single photo.
COMMENTARY: I get the shivers just knowing that anybody with a smartphone camera and face recognition software can take my picture, quickly find out who I am, and then be able to search the internet or image databases for an abundance of personal and private information about me--instantly.
In a blog post dated July 31, 2011, I profiled Social Intelligence Corporation, a Santa Barbara-based company that conducts social media background checks for employers. Social Intelligence uses special software that searches social network sites like Facebook, Twitter and LinkedIn, and other internet sites, for everything we ever posted online--wall posts, tweets, images, music and video files. Social Intelligence then analyzes that content for anything negative or derogatory (bad-mouthing former employers, political views, expletives, etc.) and furnishes the employer a social media consumer report to use in its hiring decision-making. This supposedly weeds out "undesirables".
Recruiters and employers are now using criminal background checks, consumer credit reports, and social media consumer reports to determine if job applicants are hireable. Scary isn't it? It's a miracle people are able to get a job.
Unfortunately, recruiters and employers use this information to make prejudgments about individuals, and if we don't pass the "social media sniff test", we're unhireable. And according to the FTC (which approved Social Intelligence Corporation's technology), there's not a damn thing we can do about it. Yet.
Face recognition software presents an even scarier intrusion into our private lives because complete strangers can find out who we are, including the most intimate details of our private lives, and we would never know.
Even if we take precautions not to post our pictures online, or use pseudonyms instead of our real names, if someone, even a friend, posts a picture of us online and just happens to include our name (e.g. "This is me and my friend Mary Picolino"), the whole world will know who we are in an instant.
If you've been following my posts, you know I am an advocate of both online and offline privacy, and believe that misuse of these intrusive technologies is now reaching the danger point, and something must be done.
If Google believes in its mantra of "Don't Be Evil", then I hope it will recognize just how intrusive PittPatt's face recognition technology is, put it away, store it somewhere in a drawer, and never make it available to the public.
I can understand the use of facial recognition for security purposes, such as permitting access to high-security establishments like the CIA, NSA and military bases, and to their records and files, but to just provide it to anyone is damn dangerous.
I was going to post a list of facial recognition applications software, but I have decided not to promote or encourage the use of this type of software, and swear by the almighty Gods never to use facial recognition software. Our deepest fear has finally arrived--the Era of Big Brother.
When we first previewed Instagram nine months ago, most of the initial comments predicted it would be dead on arrival. To say those people were wrong is a vast understatement. And Instagram now has five million ways to prove it.
Yes, Instagram now has five million users. That's 625,000 users for every month they've been in existence with the growth accelerating. Just this past weekend they added 100,000 new users, for example. Even more amazing, there are now 1.25 million users for every one employee of Instagram.
I got a chance to catch up with Instagram co-founder Kevin Systrom this morning to talk about the milestone and the bigger picture for the service. Beyond the five million user mark (which they actually hit yesterday), Instagram is about to hit another huge milestone: 100 million photos. They're at 95 million right now, and they're adding roughly 860,000 a day. In other words, by the end of this week, the total number of pictures should cross 100 million.
For comparison's sake, it took Flickr two years to hit 100 million photos. Instagram did it in just eight months. If you still had any doubts that a mobile photo revolution is happening, there you go.
As for the burgeoning Instagram ecosystem, Systrom says that there are now 2,500 unique apps out there accessing their APIs. Remarkably, they are also seeing some 350,000 connections across their API, meaning that some of the apps connected are massively popular. Which are the most popular? Webstagram and Flipboard were the top two the last time he looked, Systrom says. There are also now applications pushing photos into Instagram — not through the API, but through more creative means.
The emergence of Webstagram, which is a web-based viewer for Instagram photos, leads to the question of when Instagram might finally release their own web app? Systrom declined to comment on that, but did confirm that work continues in that area. As for the all-important Android question, same deal — nothing to share yet, but work continues. Systrom will say that the top priorities right now are to scale the service, scale the team, and improve the core parts of the existing iPhone app.
One of the most remarkable things about Instagram is that they've achieved such success while only being on one platform: iOS. There is no way to sign up on the web. No way to sign up on Android. They’re currently a mainstay in the top social networking apps list in the App Store. And that’s big because they’re not spending anything on marketing, and Apple has only promoted them a few times. In other words, the growth and traction has been largely organic.
Instagram has scored some deals with partners to help promote the app. But in terms of bringing in revenue, “We're much more interested in growing the ecosystem right now,” Systrom says. And they have plenty of money in the bank from a nice funding round this past February to continue growing for some time.
He also says that they have a lot more work to do on the current iPhone app. “Lots of very cool new stuff coming soon,” is all he’ll vaguely say. Though I did get him to admit that yes, more filters, are in the works. He also says there will be some “fundamental shifts in the underlying technology,” coming soon.
Systrom says,
“We want to give people the tools to tell the story of their lives in a visual way — we’re working hard on making those tools top-notch.”
Given their size and the rate at which they’re growing, Instagram clearly has a lot of competitors gunning for them. So far, most have failed to gain any meaningful traction. But Twitter just recently put themselves in the photo-sharing game in partnership with Photobucket. Given that Twitter is such an important social discovery mechanism for Instagram, does this worry Systrom?
“I’m excited to see how a more first-class experience of photos on Twitter will allow people to have a better Instagram experience within Twitter.”
In other words, he thinks the rising tide will lift all boats, including his.
There are also a number of apps popping up that are attempting to be the “Instagram of video”. That’s interesting since Instagram does not currently support the sharing of videos — might they move in that direction? Systrom notes,
“I still think it’s early — mobile video will always be slower to download and consume than photos. Instagram is about fast, beautiful experiences. Short snippets of friends’ lives. video is something that I think fits naturally into our roadmap — just not at the moment”
Earlier, I alluded to the fact (with math!) that despite their size, Instagram still has only four employees. That’s insane. Systrom says,
“Hiring great people is a top priority for me right now. We clearly have something special, and we want to make sure to have the best of the best to help us to the opportunity."
But they’re not going to rush.
“The thing we don’t want to do is to hire just because we’re big. Building a company is about building a product, but it’s also about building a team. They’re both very important to us."
Given that Instagram is still iOS-only, surely they must have some thoughts about the just-announced iOS 5. “iOS5 provides some really awesome new tools for Instagram users. Twitter integration makes it easier than ever for users to share their photos with their followers,” Systrom says. Since they have no need for DM access, Instagram should be one of the key apps helped by the new, deep iOS Twitter integration.
When I pointed out that I saw Instagram make a few appearances on stage during the keynote (in the background in demo images), this clearly made Systrom happy.
“It was awesome to see Instagram on stage behind Steve during the keynote. It’s humbling to think that we only started 8 mos ago and Instagram is now part of the de-facto set of apps that people use on the iPhone.”
It’s pretty well known as this point that Apple executive Phil Schiller is a big time user of Instagram. But we’ve heard other Apple executives are hooked on the service too — though more under the radar. Systrom says,
“It’s not surprising that new notification system in iOS5 demos featured Instagram — we send over 10 million Push Notifications per day. And I think having a home for all those pushes to be out of the way and usefully grouped makes total sense.”
As for Apple’s new Photostream feature (which shares pictures you take on your devices automatically with your other devices over iCloud), he continues.
“Photostream is really awesome. I think there was a big focus on unity between your Apple devices this year. So it totally makes sense for photos to sync between devices. I’d imagine photos you take with Instagram will get sync’d as well, but I’m unaware of exactly how it works.”
Assuming that Instagram’s huge growth keeps up, they could very well hit 6 million users before the end of June. And 10 million before the end of the year looks like a shoo-in. And none of that is taking into account the possibility of an Android app before the end of the year. Let’s just hope Instagram finds a fifth employee before then.
COMMENTARY: I wonder if Congressman Weiner used Instagram for his penis photo. That's a joke, of course.
I can understand why TechCrunch readers called Instagram "dead on arrival". The photo sharing space is so damn crowded. Photo sharing is very popular. I just checked the iTunes store, and there are over 250 photo sharing apps. I would imagine that there are an equal number of Android apps. That's a lot of competition. In order for Instagram to gain more traction, it needs to introduce an Android photo sharing app as quickly as possible.
The key reason Instagram has been able to achieve success is that it offers a very elegant, user-friendly, minimalist design with a social networking flavor, thanks to its connectivity with Facebook, Twitter and other sites. It differentiates itself with built-in photo filters that give digital photos a distinctive look.
Systrom says he is more interested in increasing users than in generating revenues, but 5 million users is nothing to sneeze at. The fact that he is offering the Instagram app for FREE is an added plus. The Apple iTunes store has plenty of photo sharing apps priced at $0.99 to $2.99. That's another big plus. Once again this proves that FREE works.
Courtesy of an article dated June 13, 2011 appearing in TechCrunch
No details were available about what that would look like or how it would work--and the company declined to comment. But it's possible the feature would give users similar capabilities to those they currently enjoy from third-party apps like TwitPic and yfrog. That is: The ability to take a picture with a smartphone and, with the click of a few buttons, upload it to the Internet and post it to Twitter.
It would be the latest in a string of actions taken to bring popular Twitter functions to Twitter itself--by acquisition or by developing them itself--rather than leaving the features to third-party app developers.
Twitter’s leaders have talked plenty about third-party apps. They can extend Twitter's capabilities, but they can also turn off the more casual, less tech-savvy user. And Twitter needs to recruit those users to grow. While Facebook has over half a billion users, Twitter still only has 175 million. (And Business Insider today suggests the number of active users might be even lower than that.)
Growth is the reason, for example, that the company bought Tweetie, the maker of an iPhone app that let people send and read tweets on their phones. Before the acquisition, as then-CEO Ev Williams explained at an event last year, some users would search the App Store for “Twitter.” Since there wasn’t an app by that name (Tweetie was called “Tweetie”), they didn’t find one, assumed it didn’t exist, and concluded they couldn’t use Twitter from their phones. Buying Tweetie allowed the company to rebrand the app so new customers could find it--and use it.
It’s this same thinking that prompted the company to launch a redesign of its interface last fall--to help new users more quickly discover how to use the service. And it’s why the company redesigned its homepage earlier this year--to help new visitors better understand how to use the service.
Back in March, director of platform Ryan Sarver, who’s responsible for helping outside companies build applications on top of Twitter, told developers that making the service easier for new users was one of the company’s top priorities. And he talked about how the experience of using different apps, that worked in different ways, sometimes slowed them down.
Ryan Sarver wrote in a Twitter forum for developers,
“Our user research shows that consumers continue to be confused by the different ways that a fractured landscape of third-party Twitter clients display tweets and let users interact with core Twitter functions."
He also noted that 90% of Twitter users “use official Twitter apps on a monthly basis.”
All of which leads to this week's expected photo-sharing announcement. Twitter users like to type status updates, but they also like to share photos. If it’s too hard for many of them to figure out how to use a TwitPic or yfrog--or even to realize that they exist--then it makes perfect sense for Twitter to do what it needs to do to improve that experience. And that just might mean bringing those features in-house.
COMMENTARY: Adding photo sharing is no big thing. I like the third-party apps, and use TwitPic.com all the time. At this point, adding photo sharing doesn't increase the overall experience or the Twitter value proposition. I don't like small, incremental, me-too improvements. They need to take the Twitter experience to the next level. If Twitter decides to cannibalize its third-party vendors by adding its own app features, then it needs to change its business model from open to closed.
Courtesy of an article dated May 31, 2011 appearing in Fast Company