Disruptive Innovation

Catherine Mohr: Medical Research, Technology and Innovation

She calls herself “a tinkerer at heart.” And ever since Catherine Mohr walked into a Boston-area bike shop looking for a high school job repairing drive trains and spokes, the New Zealand-born surgeon and inventor has taken tinkering to a mind-boggling high art here in Silicon Valley.

Dr. Catherine Mohr is the Director of Medical Research at Intuitive Surgical, the global technology leader in robotic-assisted minimally invasive surgery. In this role, she evaluates new technologies for incorporation into the next generation of surgical robots. In addition, she is a Consulting Assistant Professor in the Department of Surgery at Stanford School of Medicine, where she works on the development of simulation-based curricula for teaching clinical skills. She is also on the medicine faculty at Singularity University and an advisor to the Future of Health Systems Working Group of the World Economic Forum.

Dr. Mohr received her BS and MS in mechanical engineering from MIT, and her MD from Stanford University School of Medicine. During her initial training as a mechanical engineer at MIT’s AI Laboratory, Mohr developed compliant robotic hands designed to work in unstructured and dynamic environments. Later, while pursuing an MD degree at Stanford, she identified needs for new laparoscopic surgical instruments, collaborated to develop the first totally robotic Roux-en-Y gastric bypass, and invented and then started a company to commercialize the “LapCap” device for safely establishing pneumoperitoneum.

She has been involved with numerous startup companies in the area of alternative-energy transportation, and worked for many years developing high-altitude aircraft, high-efficiency fuel cell power systems, computer-aided design software, and medical devices. She has spoken twice at the TED Conference. In her TED2009 talk, she tours the history of surgery, then demos some of the newest tools for operating through tiny incisions using nimble robotic hands. In her TED2010 talk, she walks through all the geeky decisions she made when building a green new house, looking at real energy numbers, not hype.

To learn more about her work, please visit her official website.

The following is an interview with Dr. Catherine Mohr about Medical Technology, Innovation and Creating a Better World. The interview has been edited for brevity.

Niaz: Dear Catherine, I really appreciate you taking time to join us at eTalks. I am thrilled to have you.

Catherine: Thank you for the invitation, it is great to be here.

Niaz: You are the Vice President of Medical Research at Intuitive Surgical, where you develop new surgical procedures and evaluate new technologies for improving surgical outcomes. You have profound experience and a great body of work in the fields of medical and disruptive technology. In addition, you’re very passionate about the future of science, technology, engineering and mathematics. To begin our interview, please tell us a little about your background and how you got started.

Catherine: I am originally from New Zealand and grew up in Boston, although you can’t infer either of those facts from my accent. I always knew that I wanted to be a scientist, but my path to medicine wasn’t typical. As an undergraduate, I majored in mechanical engineering and built and raced solar cars as part of MIT’s team. That led me to work in alternative energy with Paul MacCready at AeroVironment, on hybrid electric cars and fuel cells. It was a wonderful time, and I remain very committed to sustainable technologies – encouraging kids at every opportunity to consider careers in science and engineering.

Niaz: Tell us about the road that led you to the world of robotic surgery. It was not a straight path, it seems.

Catherine: It wasn’t until after many years of working as an engineer that I went to medical school. I was in my 30s, and hardly the typical medical student. In many ways, I ended up in medicine because I was very interested in getting back onto the steep part of the learning curve. I loved engineering, but I had become an engineering manager, and I was looking for a new challenge.

In medical school, I did a lot of research in surgery and surgical technologies as part of my schooling. I encountered the da Vinci Surgical System and started doing procedure development with one of my attending surgeons. We both work for Intuitive Surgical now – she as Chief Medical Officer, and I as VP of Medical Research.

Niaz: Intuitive Surgical is a high technology surgical robotics company that makes a robotic surgical system. Today, Intuitive Surgical is the global leader in the rapidly emerging field of robotic-assisted minimally invasive surgery. We would like to learn more about Intuitive Surgical. Can you please tell us about Intuitive Surgical, its current projects and also how it has been innovating our future?

Catherine: The flagship product at Intuitive Surgical is the da Vinci Surgical System. It allows a surgeon to operate with full dexterity and capability, but through tiny incisions. The da Vinci System has been a major driver of the increase in rates of minimally invasive surgery across many types of procedures where minimally invasive techniques were previously too complex, intricate or just too fatiguing. As of early this year, we estimate that two million procedures have been done worldwide with the da Vinci System.

Current research and development projects at Intuitive Surgical are aimed at increasing the capabilities and decision making resources of the surgeon while continuing to decrease the invasiveness of surgical therapies. The goal is always working toward better surgeries that are less invasive.

Niaz: The da Vinci Surgical System is a sophisticated robotic platform designed to expand the surgeon’s capabilities and offer a state-of-the-art minimally invasive option for major surgery. It incorporates disruptive technologies such as robotics and a high-definition 3D camera. Please tell us: what is the da Vinci Surgical System, and how does it work?

Catherine: Although it is often referred to as a “robot,” a more appropriate description would really be “telemanipulator,” as it doesn’t make any autonomous decisions of its own. To operate the da Vinci System, the surgeon sits at a console which has both a 3D display and a pair of input devices that capture the motions of the surgeon’s hands; the da Vinci System then moves the surgical instruments in a precise, scaled replica of the motions that the surgeon is making. This is coupled with a 3D camera so that the surgeon sees the instruments in the display superimposed over where they feel their hands to be.

Sitting down at the console, moving these input devices, and seeing the instruments move exactly the same way is the “intuitive” part of the process.
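The motion-scaling idea Dr. Mohr describes can be sketched in a few lines of code. This is purely an illustrative toy, not Intuitive Surgical's implementation; the scale factor and smoothing weight are made-up example values, and real systems do this in three dimensions with far more sophisticated filtering.

```python
def scale_motion(hand_deltas, scale=0.2, smoothing=0.5):
    """Map raw hand displacements (mm) to instrument-tip displacements (mm).

    scale     -- motion scaling: 1 mm of hand travel moves the tip `scale` mm
    smoothing -- exponential-moving-average weight used to damp hand tremor
                 (0.0 = no filtering, values near 1.0 = heavy filtering)
    """
    instrument_deltas = []
    filtered = 0.0
    for d in hand_deltas:
        # Smooth the raw hand motion, then scale it down for the instrument.
        filtered = smoothing * filtered + (1 - smoothing) * d
        instrument_deltas.append(filtered * scale)
    return instrument_deltas

# With no smoothing, a 10 mm hand movement becomes a 2 mm tip movement.
print(scale_motion([10], scale=0.2, smoothing=0.0))
```

The point of the sketch is the two-step mapping: the surgeon's large, slightly tremulous motions are first filtered, then reduced, so the instrument tips move with more precision than the unaided hand.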

Niaz: How is robotic surgery, using something like the da Vinci system, better than the old-fashioned way with human hands?

Catherine: The human hand is rather large – at least when you are thinking about making an incision in the body large enough to fit that hand through. The da Vinci instruments are only 8mm in diameter, so they allow you to bring all the capability of that human hand into the body, but through a small incision. This is much better for the patients, as they get the same operation inside, but they heal more quickly with less pain.

Niaz: If we look at the evolution of surgery, we can see that huge changes have happened over the last two decades. With the rapid acceleration of human-machine interaction, the potential of robotics in surgery is vast. How can innovations like robotic-assisted surgery change the world of surgery?

Catherine: The changes haven’t only been happening on the surgical side. The improvements in surgery will come partly from synergies with advances in other parts of medicine. Some of the most exciting things that I have seen have been improvements in diagnostics and screening. As we find cancer earlier and earlier when it is easily cured surgically, we won’t have to do huge reconstructive operations to restore the function that would have been lost by cutting out the larger tumor. This gives us the opportunity to further reduce the invasiveness of our surgical therapies by moving to even smaller incisions, or going in through the mouth and avoiding external incisions entirely.

Niaz: What do you see as the future of robotic surgery? What are our core challenges to reach to that future?

Catherine: As we look at reducing invasiveness, we always want to be able to build things smaller while maintaining strength and precision. Interestingly enough, some of the biggest advances in robotics may come from new material science and machine tools.

Niaz: As an expert in the fields of robotic surgery and sustainable technologies, you’re passionate about realizing the potential benefit that appropriately applied technologies can have in our society, and inspiring the next generation of scientists and entrepreneurs to tackle the world’s important problems. Can you please tell us about some interesting and tough technological problems that you want next generation of entrepreneurs to solve?

Catherine: Apart from the new materials, many of the opportunities to do extremely small interventions will rely upon being able to navigate within the body – like having a GPS for the body. Today, we can map the body with things like CT or MRI imaging, however, the body does not stay static. Organs move constantly, which makes navigating with a preoperative image like trying to follow a GPS map while the roads are constantly changing and moving, but your map never updates. Solving these problems would make it easier to make surgery even less invasive.

Niaz: As you know, it’s really hard to achieve scientific breakthroughs, to build companies like Apple, Google, SpaceX, and Tesla, to do something at massive scale with truly disruptive technology. I would like to hear your ideas on making breakthroughs, coming up with authentic disruptive innovation, and building the next big organization.

Catherine: Solving problems that matter is the key to these disruptive companies. The problems that matter also tend to be hard, so you need to be patient and dig deep into the technology to get to solutions. None of the companies you mention is short on ambition; they all started fairly small, and they are deep experts in their technologies.

Niaz: Do you believe Silicon Valley is still the best place to build next big technology company?

Catherine: It is the best place because its historical success has led to an intense concentration of tech talent. However, the shortage of housing and the resultant astronomical housing prices make it rather difficult to attract people who aren’t already here.

Niaz: What actually makes Silicon Valley so special?

Catherine: Critical mass. The concentration of talent, and the expectation that you will fail a bit before you succeed continues to attract the ambitious with big ideas. People cycle through startups gaining experience, and they keep going until they do succeed.

Niaz: You’re a medical technology pioneer, a mechanical engineer, and an expert in robotic surgery. Prior to going to medical school, you worked in the fields of alternative-energy transportation and sustainable technologies, spending many years with Dr. Paul MacCready at AeroVironment developing alternative-energy vehicles, high-altitude aircraft, and high-efficiency fuel cell power systems aimed at reducing the world’s energy consumption and emissions. Can you tell us how you connect all of your skills, expertise, ideas and knowledge to break through in any specific field, to get the best out of it or build big things?

Catherine: Much of what I do involves understanding how the problems we are trying to solve are part of large interconnected systems, and thinking about optimization across the entire system. Optimizing only one part of the solution at the expense of the other important parts is counter-productive – for example, maximizing energy storage without considering weight for an airplane, or improving surgical capability without making it easy enough to operate safely. The big interconnected problems I like to tackle involve many of the same skill sets, even if they are in far-flung areas like sustainable energy and surgery.

Niaz: How beneficial is it to have a multi-dimensional background and expertise?

Catherine: Solving any of these big problems is always a team effort. The myth of the lone inventor is just that – a myth. You need a huge diversity of skills on a team, but that very strength means that teams often have difficulty communicating if the backgrounds and experiences of the team members are too different. The people who have experience, background and training in several fields act as the linkers and translators within teams. I like to joke that I am “trilingual” – I speak Geek Speak, Medical Jargon and English – three mutually unintelligible languages. Being able to explain the clinical to the technical and the technical to the clinical is a valuable role.

Niaz: As far as I know, you hold several patents. Please tell us about them.

Catherine: Most of these are in the area of manipulation or vision on the da Vinci System. You’ll notice that few, if any, of those patents list me as the sole inventor. Invention tends to come when you are solving a new problem with a team, and have the opportunity to try novel solutions. The best ideas are also often hybrids of many people building upon and improving each other’s ideas as you solve a problem together. Patents certainly serve a purpose in that they give you a period of time in which to use an idea before a competitor can legally copy it, but it is the teamwork and problem solving aspect of it that I enjoy the most.

Niaz: What is your favorite part about working at Intuitive Surgical?

Catherine: Getting to remain on the steep part of the learning curve. Medicine and technology are changing so rapidly that keeping up with what is going on is a constant process – one that I enjoy very much.

Niaz: As Vice President of Medical Research, what do you do on a daily basis? What is a normal day like for you?

Catherine: I’m not sure if I really have a normal day. Some days are lab days when we are in the research operating room developing new procedures or testing out prototypes of new instruments. Other days involve traveling around and both speaking about our technology and learning about new technologies from their inventors. And, some days involve trying to look out into the future to see what changes are happening in medicine so that our next products fit the new needs that are arising.

Niaz: What other kinds of projects or initiatives have you been involved in?

Catherine: I started playing the cello recently, and through building our house and blogging about it, I have been active in the conversation about green building and native plant gardening. Recently, I have also started working with GAVI, the vaccine alliance, on technologies for tracking vaccines in developing countries.

Niaz: You wanted to save the world, or at least a piece of it. But you just weren’t sure how to go about it. And now, in 2014, we can see your profound body of work that has helped change the world of robotic surgery and sustainable technologies. I know there is still a lot more to come. What would be your advice for those who want to follow in your footsteps and change the world to make it a better place to live?

Catherine: Focus on the problems that matter to you; if something matters to you, it probably matters to other people too. People make the mistake of focusing on what they think other people want, and then their hearts are never really in it. Without passion you won’t have the drive to do all the really hard work that comes with trying to make a difference. People are very impatient for success now, but it will never come unless they take the time to become deeply educated and skilled in the areas needed to make a contribution.

Niaz: Any last comment?

Catherine: The technologies that will probably shape our future careers are in labs somewhere. I expect I will reinvent myself several more times as those technologies come out of the lab and start changing our world.

Niaz: Thanks a lot for joining us and sharing your great ideas, insights and knowledge. We wish you the very best of luck in all your upcoming endeavors.

Catherine: Thank you for putting this program together.

_  _  _  _  ___  _  _  _  _

Further Reading:

1. Andrew Hessel on Biotechnology, Genetic Engineering and Future of Life Science

2. Aubrey de Grey on Aging and Overcoming Death

3. Irving Wladawsky-Berger on Evolution of Technology and Innovation

4. Gerd Leonhard on Big Data and the Future of Media, Marketing and Technology

5. Viktor Mayer-Schönberger on Big Data Revolution

6. James Kobielus on Big Data, Cognitive Computing and Future of Product

7. danah boyd on Future of Technology and Social Media

8. James Allworth on Disruptive Innovation

9. Brian Keegan on Big Data

10. Ely Kahn on Big Data, Startup and Entrepreneurship

Jon Nathanson: Apple, Disruption, Fire Phone and Content Business

Jon Nathanson is a technology and business columnist for Slate. He is also an angel investor and a strategy consultant in San Francisco and Los Angeles.

The following is an interview with Jon Nathanson about Disruptive Innovation, Apple, Amazon’s Fire Phone, Disrupting Hollywood and the Future of the Content Business. The interview has been edited for brevity.

Niaz: Dear Jon, thank you so much for finding time to join us at eTalks in the midst of your busy schedule. We are thrilled and honored to have you at eTalks.

Jon: Thanks for having me! It’s a pleasure and an honor.

Niaz: You are a technology columnist, startup investor, and strategy consultant in San Francisco and Los Angeles. At the beginning of our interview can you please tell us more about yourself, your works, and current involvements?

Jon: It sounds so corny, right? “Technology columnist, startup investor, and strategy consultant.” Those are some of the things I do every week—but put them together like that, and they don’t amount to a coherent job description. Unfortunately, I’m the one who put them together that way, when I was asked by Slate to give a tagline for my column, “The Bet.”

But let’s unpack the list. I’m a columnist for Slate, and that’s a fairly recent turn of events. I’ve been writing my whole life. I doubt anyone will give me credit for it, but I was the editor-in-chief of my high school paper, which won numerous national awards and was consistently ranked at the top of the nation…for a high-school paper. For whatever that’s worth. (Probably not much.) That was, sadly, the beginning and the pinnacle of my un-storied career in journalism. After graduation, I packed up my proverbial press pass and moved on with my life. But it still called to me. I successfully ignored that call for the first decade of my professional life.

That was up until early last year, when I realized I’d been wasting an unseemly amount of time commenting on Hacker News every day, and I came across a listing on HN for Priceonomics. Priceonomics is a Y Combinator company that started a blog initially as a content marketing effort, but that came to specialize in writing top-quality blog posts. They became so good at it, in fact, that they were regularly charting to the front page of HN, and I was regularly reading their stuff. I saw they were looking for writers, and I applied that instant. Through my work with Priceonomics, I started getting attention from other journalists and media outlets, and I was invited onto NPR a few times. It was very quick and very surreal. Next thing I knew, I had an agent, and soon after that, the gig with Slate. It was one of those cases, as they say, where my “overnight” success was the result of 20 years of preparation. When I was invited up to the big leagues, I’d been practicing my swing for decades. (So you’d think I’d be better at writing job descriptions for myself…)

As for the investing and consulting—those, too, are fairly recent ventures a long time in the making. I’ve been informally advising friends’ startups for years now. And in 2013 I started putting my money where my mouth is, investing at the seed stage in several companies I knew well and believed had a serious shot at success. It’s funny how the angel community works. You invest in a few companies, and next thing you know, more companies and more opportunities are coming your way, all because the founders and co-investors you’ve gone in with are friendly with others. And platforms like AngelList have made the process even more social. Next thing I knew, I was investing or advising enough startups—and devoting a scary amount of my workweek to doing so—that I felt justified in taking a step back, evaluating it, and calling it a significant part-time job. Investing and consulting had earned their fair place on my motley tagline.

Niaz: I would like to start our interview by discussing disruptive innovation. The last few weeks have been pretty interesting, with much discussion for and against disruptive innovation. Jill Lepore, a Harvard professor, has written an extraordinary piece in The New Yorker calling disruptive innovation a myth. Even the father of disruptive innovation, Professor Clayton Christensen, now thinks disruption has become a cliché. You have seen how “disrupt,” “disruptive,” “disruption,” and other buzzwords around disruptive innovation have become a common phenomenon in the tech industry. Can you please tell us what you think about disruptive innovation? How is something dismissed as a buzzword, myth, or cliché nonetheless driving revolutionary change? Or is there something else, like mindset, behind the scenes that is the real engine of these changes?

Jon: First of all, I think it’s intellectually—and, dare I say, emotionally—consistent to appreciate Jill Lepore’s article and to maintain a healthy respect for Christensen’s thesis. People will say that Lepore has chipped away at the very foundations of Christensen’s theories of disruptive innovation. I don’t necessarily agree. The analogy I’d use is that she’s shone a very bright light on it. She’s walked down into the basement of the building, and she’s lit a floodlight on everything there, exposing the cracks, the structural weaknesses, and the clutter. But the building itself is still (mostly) sound.

It helps to frame Christensen’s original thesis in context of the intellectual climate of his day. “Disruptive innovation,” as Christensen originally charted it out, was a theory of market competition that sought to expand upon the work of Michael Porter and his “Five Forces” framework. Porter argued that there are five major forces in play in any given market: competitive intensity between existing players; suppliers’ bargaining power; buyers’ bargaining power; the threat of substitute products or services; and the threat of new entrants into the marketplace. Christensen, to put it in physics-geek terms, sought to unify two of the five forces: the threat of new entrants, and the threat of substitution.

“Disruptive innovation” occurs, in Christensen’s framework, when less-than-perfect substitutes arise for existing products, capitalizing on benefits (in solution, in cost, or in feature set) that the current players in the market either don’t think are important, or think are inferior. Christensen argued that new entrants—startups, as we now call them—are usually the bearers of the substitute products, because they have no legacy supply chains, cost structures, or customer requirements to satisfy. And he argued, in a Schumpeterian sense, that these new entrants would usually, or even inevitably, “disrupt” the existing market and unseat the established players.

Lepore’s research disputes the second of those premises, but not the first. She showed that new entrants tend not to survive the shakeup. Their function is usually catalytic. They enter a market, stir the pot, and get acquired or driven out by the legacy players once the legacy players catch up. But shakeups can and do happen, and they often play out in the dynamic that Christensen outlined in The Innovator’s Dilemma.

So it appears that Christensen was largely right about the dynamics of disruption, but less right about the outcome of disruption, or about the inevitability of its winners and losers. He raised valid and provocative ideas. But his project for the unification of two forces—new entrants and substitution—was not entirely successful.

That said, I’d still recommend The Innovator’s Dilemma as mandatory reading in any core business school curriculum or strategy class. Readers should simply place it in context. Darwin’s On the Origin of Species was foundational in outlining the theory of evolution by natural selection—but it’s a very old text these days, and it got some things wrong, and others have come along and corrected or expanded upon them. Those corrections, and those amendments, do not invalidate the importance of Charles Darwin to the field of evolutionary biology. Similarly, modern challenges and updates to Christensen’s work don’t necessarily invalidate the significance of his work.

This isn’t a baby we should throw out with the bathwater. As for the cult of “disruption” that has sprung up around Christensen’s work over the last few decades: that’s a different story. Disruption, in and of itself, shouldn’t be the driving goal of any given startup. Innovation is the goal. Disruption is the means to the end. And not all kinds of innovation are necessarily “disruptive.” Even the big kinds.

If founders and thinkers take away one thing from Lepore’s challenge to Christensen’s work, it should be that disruptive innovation is a theory. It is not the only theory people need to know, and it is neither universally applicable nor wholly actionable. The Innovator’s Dilemma deserves a place on your bookshelf, but it shouldn’t be the only book there.

Niaz: Folks have long been waiting for the disruption of Hollywood. But Hollywood has held massive disruption at bay for years. You have an interesting column in Slate, Why Hollywood Resists Disruption, where you compare Hollywood to the Roman Empire, noting in particular that the Roman Empire did not actually fall but instead divided and dispersed. Can you please briefly tell us why Hollywood resists disruption? Do you feel your opinion is influenced by your experience at NBC and 20th Century Fox? And what would the outcome of a massive disruption of Hollywood be?

Jon: The analogy to the Roman Empire was a colorful and nerdy one, no doubt spurred by my inability, after all these years, to stop playing Rome: Total War or watching movies like Gladiator. But the analogy is this: Rome was a remarkably adaptable political organism. It was constantly shifting its boundaries, incorporating its former enemies, and bringing them into the fold. By the end of the Empire, Rome was so thoroughly, demographically changed that a “barbarian” of Germanic bloodline was leading its army against Germanic barbarians at its gates. Hollywood is similar in that respect: companies like Netflix have disrupted and shifted the borderlands, so to speak. Distribution of movies and TV shows and music is wildly different now, and none of it to Hollywood’s real benefit. But Hollywood has maintained control over talent, over the means of production, over the storymaking-to-filmmaking process, and has maintained an indispensable role in the process of creating and distributing entertainment to the masses. More and more people get their shows through Netflix, but Netflix’s shows are still made by Hollywood studios and Hollywood production companies, at Hollywood prices.

Here’s where things will get interesting: Hollywood owns very few of the “last miles” in any of its consumer pipelines right now. Movie studios don’t own the major theater chains, at least in this country. They don’t own the customer relationships at iTunes, Amazon, Netflix, or Xbox Live. TV networks still have a direct pipeline to viewers, but that pipeline is eroding: fewer and fewer people watch network programming on the networks themselves, at the appointed days, dates, and times.

And so Hollywood is at a crossroads. Should it abandon the fight for last-mile distribution, and focus entirely on creating and licensing content? If so, a lot of very big, very consolidated media companies are going to need to do some major restructuring. Should it keep up the fight for relevance in distribution? If so, studios or production companies will need to build a credible alternative to Netflix, iTunes, etc. HBO Go is a very interesting example, and I think its success will be a bellwether for the next few years. Already we’re seeing just about every network under the sun releasing its own “HBO Go” app. And consumers seem to be fine with that—an app for every network. But they’ll be fine with it up to a point. A future in which every network has its own app necessarily means that every consumer needs to keep track of which shows belong to which networks, and can be found on which apps. That’s a high cognitive load to bear, and it’s a consumer-unfriendly burden to impose. Consumers love convenience, and Netflix is very convenient. I don’t think an ecosystem of 20 different HBO Go-alikes is a viable, consumer-preferred alternative to Netflix. But maybe a handful of apps are. Apps differentiated by genre. Or subscription streams based on dynamics the major players aren’t thinking about today, like group subscriptions, or customizable subscriptions for only the shows you want, and not the stuff you don’t want.

I spent many years in Hollywood, working on primetime shows at NBC, Fox, and elsewhere. I think my time there gave me a deep appreciation for just how hard it will be to disrupt Hollywood, and at the same time, just how much disruption probably should take place. It’s a paradox, and to circle back to your earlier question, I wonder whether Christensen’s framework gives us any guidance as to how this will play out. Christensen’s work might argue that YouTube and Vine are changing the nature of entertainment content, and that inevitably, full-length, TV-style shows will fall by the wayside. And yet that’s not entirely true. Teenagers are probably watching YouTube and Vine to the exclusion of more and more TV-style programming. And yet, uber-premium TV programming like Game of Thrones and Breaking Bad is more relevant than ever before. Perhaps the middle is falling out this time, and we’ll live in a world with supergood content and superdisposable content. Nothing in between.

Niaz: You’ve spent a lot of time in the content business. We are now living in an exciting era of content creation, curation and distribution, where there is a popular belief that ‘content is king.’ From hardcore tech companies to venture capital firms to social media companies to marketing companies to media companies, everyone is actually in the content business. Does that mean that if you’re not doing content, you’re missing something really big?

Jon: “Content marketing” is having a moment right now. Everyone feels that adding something substantial to the conversation is necessary to winning business and maintaining credibility in whatever industry they happen to play in. Witness companies like Google Ventures, which are adding libraries of advice, content, and the like to their arsenals in an attempt to become better full-service providers to portfolio companies. Or companies like Priceonomics, whom I mentioned earlier: research companies that regularly publish accessible, in-depth, top-quality articles for anyone to read, regardless of whether they’ll be users of Priceonomics’s core services.

Some companies will get content marketing right, and many will embarrass themselves. The ones who’ll get it right will realize it’s a full-time task. It’s more than a full-time task. It’s a way of thinking. It’s an editorial sensibility. The folks at Priceonomics spend as much time writing, editing, and investing in their blog as they do their data-analytics services. If Google Ventures is going to fulfill its very exciting ambitions in the content space, it’s going to need to elevate content to the forefront of what it does, right alongside investing.

Content can be king, but if it’s going to be king for you, then you need to treat it like royalty. Take it as seriously as anything else. Don’t half-ass it. Bad content marketing is blatantly obvious to all who come across it, and it’ll actually hurt your company. Great content will do wonders for your company. But you’re going to need to commit to it and commit fully. If your company wants to do content marketing, then everyone at your company should be prepared to chip in every week. Including your CEO. Making world-class content takes a ridiculous amount of time and effort, and the bar for world-class will be raised in the years to come.

Niaz: You’re the co-author of a Harvard Business School case study on Netflix and its use of collaborative filtering technology to disrupt traditional models of consumer discovery and consumption of entertainment. With Google, Apple, and Amazon now entering the content business at massive scale, how do you think that will affect the future of Netflix?

Jon: To understand Netflix’s situation right now, it helps to understand HBO’s situation 15-20 years ago. HBO—the acronym stands for “Home Box Office”—started out licensing and replaying movies. That’s it. It was a distributor of movies shortly after their theatrical release, and before their home video release. And that was a brilliant business model in the days when windowing mattered a great deal, and there were few other ways to see movies after they’d left theaters. But HBO had to adapt as the years went by. Other networks popped up with similar business models. The DVD player came along and revitalized the home video market. The internet was starting to provide rough but credible means for getting one’s hands on movies. Local TV stations were getting more aggressive about licensing first-run movies. And so HBO needed to create original content. It started with documentaries, then moved up the value chain to original, scripted series. And it poured a hell of a lot of money, time, and effort into ensuring that its series were great. HBO’s executives in the early 1990s would hardly recognize the HBO of today, and vice versa. Today’s HBO is best described as a premium TV-show network, and not a premium movie-licensing network.

Netflix is in a similar situation. It got to where it is today by being the most convenient, optimized, consumer-friendly way to watch movies and TV shows. But networks and studios realized that Netflix was a threat to their business model, and they started threatening Netflix with higher licensing fees. Some pulled their content altogether. And so Netflix faced a choice: fight tooth and nail to be a commodity provider of everyone else’s content, or start developing exclusive, original content of its own. And it’s started to diversify its mix with the latter. The problem is, now Netflix is in the hit-driven business of TV development. It might spend $100 million on a show that flops. Or it might spend $100 million on a show that temporarily drives subscriptions and maintains customer loyalty, but whose run expires in a few years. Meanwhile, it’s still spending close to a billion dollars a year licensing everyone else’s content. Netflix’s operating costs are going to skyrocket in the years to come. At the same time, Netflix is still the most convenient and ubiquitous way for many, many people to get the shows they want to see.

Apple doesn’t seem to have the taste for developing original shows, nor do most analysts think it should. I’d probably agree (for now). Amazon has the muscle and the clout to compete with Netflix, but its efforts in the originals-development space have been lackluster to date. Friends within and without the company tell me it’s not taking development as seriously as it could. But that doesn’t mean it can’t, or that it won’t. Google is a very interesting dark horse. It owns the “low end” with YouTube, and that low-end will be very lucrative. Meanwhile, it’s building out its own infrastructure with Fiber, and its own platforms with Chrome and Android. All it needs to do now is shell out the cash on originals and on premium licenses—but we’d be talking hundreds of millions, and possibly even billions, to outcompete Netflix with Hollywood-quality programming. To date, Google hasn’t really shown the desire or the capacity to do that. It’s had a lot of false starts inking expensive deals with celebrities, writers, and producers—but very little has come of that. As I mentioned earlier, content is an all-or-nothing proposition. You’re going big or you’re going home. Google can go big, but it needs to go quite big, and I think it’s been a little scared of just how big “big” really is.

Niaz: I believe Apple’s purchase of Beats is a pretty big deal when we consider the integration of culture and creativity. We have seen that both culture and creativity are at the heart of Apple’s whole ecosystem. At the same time, Beats will give Apple access to a different customer segment that is pretty huge not only for music but also for healthcare. I am excited to see some integration of Beats headphones with Apple’s healthcare efforts in the near future. On the other hand, executives like Jimmy Iovine and Dr. Dre will make Apple’s path a lot easier to play a bigger game in the content business. Can you please tell us your ideas and takes on Apple’s purchase of Beats? What new innovations do you expect to see from the integration of the Beats and Apple ecosystems?

Jon: I wrote a bit about Apple and Beats in Slate recently, and the long and short of it is this: I think it was a smart buy. Apple needed a streaming service; it needed to diversify its customer base; it needed to establish credibility in the creative community and in Hollywood to place itself on competitive footing with Amazon and its other competitors, real or putative. And it gets some high-margin, bestselling hardware as part of the package. The icing on the cake is that Apple was sitting on a mountain of cash, partly because there are almost no great ways to get a respectable return on cash right now in any market. So this was a good, productive use of free cash.

How will Iovine and Dre get involved? A lot of people are speculating that they’ll start a sort of mini-studio within Apple, commissioning original content. That has never really been a focus of Apple’s, but it would be very interesting to see. The thing is, everyone needs originals right now. Everyone needs exclusives. Apple’s strategy, to date, has been to let its platform (iOS) be the soil in which developers plant and nurture the seeds. And I believe Apple will still operate a content business from that worldview. You won’t see Apple producing its own shows, but you may well see Apple shelling out serious money for exclusive distribution windows, or for first-look deals, or maybe even for first-run programming. But other people will make those shows for Apple. Apple won’t make them itself.

Niaz: Let’s talk about WWDC. Apple announced iOS 8 and OS X 10.10 Yosemite at WWDC 2014, in addition to some other major updates. With all these great new updates, it seems inevitable that they will be accompanied by larger screens on the iPhone, iWatch, and probably Apple TV later this year. What has fascinated me most is that Tim Cook has been able to transform Apple and make it his own in such a short amount of time. It seems like Apple is ready to kick-start again with remarkable products and services. I have seen some hints of new products from Eddy Cue, Apple’s senior vice president of Internet Software and Services, at the Code Conference. What are your takes on WWDC? What do you think about all these new updates?

Jon: I was very excited by WWDC, and I would echo a lot of the sentiments coming out of the Apple blogosphere. I am very excited by the expansive platform potential of iOS. It could well become Apple’s Windows, ironically enough: a ubiquitous operating system that is embedded into, plays with, or powers everyone else’s hardware. The difference between the Windows era and the iOS era, of course, is that Apple is a hardware company—so any distributed ecosystem involving iOS would, by necessity, mean every other device merely uses iOS, but you’ll need Apple devices to control them all. Apple devices will be the hub, and everyone else will be a spoke.

Niaz: With the release of latest iPhone 5C, entrance in a new market, new openings of Apple stores globally, and overall performance in China and Japan, it seems like Apple is going truly global with massive scale. What do you think the future holds for Apple, a company with $600 billion market cap, $45.6 billion in quarterly revenue, and a 39.3 percent gross margin? Should they focus on becoming dominant in entertainment and communication or expand their products and services to other things?  And how will Apple’s competitors compete with this massive scale of product, service, content, and global distribution?

Jon: I mentioned how Apple envisions a future in which it’s the hub, and everyone else is a spoke. Well, that future is by no means assured. Google is putting up very credible competition. Apple is selling a remarkable number of devices in markets like China, and nominally speaking, it’s growing. But worldwide, its rate of growth might be slowing. So the question will soon become: how does Apple transition from its current growth model—putting an iDevice into everyone’s hands—toward a more mature growth model, capturing the value from all those iDevices in all those hands? Sooner or later, there will be a limit to how many device refreshes consumers will tolerate at Apple’s margins. That’s why Apple is getting increasingly serious about iOS as a platform, to ensure the continued necessity of iDevice refreshes.

It’s somewhat fashionable, once again, to look toward a future of slowed, or at least less explosive iDevice sales growth, and predict doom and gloom for Apple. I think that’s a simplistic view. Apple isn’t going anywhere. But it’s in transition. Apple is maturing as a company, and its mature business model is going to look more steady, more stable, and less notionally explosive than its model has over the last decade. I don’t think that’s a bad thing; it’s the aftereffect of so much success for so long. Apple has planted the world’s lushest orchard; now it’s got to make something of the fruit.

Niaz: As you know, the smartphone industry has been facing fierce head-to-head competition, and now Amazon is entering the ring with the release of the Fire Phone. In a recent interview with the New York Times, Jeff Bezos, Amazon’s Chief Executive, asserts, ‘I think in the whole evolution of this [smartphone], we’re still pretty early’. Do you agree that Amazon’s arrival in the smartphone industry is pretty early, when we have started imagining a world without any device like a smartphone? What is your overall evaluation of Amazon’s Fire Phone? How do you feel about its exclusivity with AT&T? Is it going to be huge? What further steps should Amazon take to compete with other smartphones?

Jon: It’s important to place the Fire Phone not just in the context of the smartphone market, but also in the context of Amazon’s corporate strategy.

Let’s think back to the tail end of the last decade and the beginning of this one. Amazon is the king of ecommerce. It’s the world’s largest bookseller, and it’s a credible force—if not necessarily an undisputed leader—in movies, video games, music, and other entertainment categories. Along comes Apple with iOS, and eventually the iPad. Suddenly, Amazon is facing a serious threat to its book and entertainment businesses. So it releases the Kindle, a purpose-built book reader. It turns out that no one’s satisfied with a purpose-built book reader. A book reader is insufficient to compete with more feature-complete hardware like the iPad (and the emerging Android device ecosystem). So Amazon releases the Kindle Fire, a full-featured device. But that’s not enough. Amazon feels it needs a full mobile hardware platform. Hence, the Fire Phone.

There are some problems here, not the least of which is that nobody has been able to crack the Apple/Google stranglehold on the mobile device market in a serious way. Fire Phone, like the Fire tablet, might wind up a day late and a dollar short. At the same time, Amazon needs to do something. The future of books, games, movies, TV, and music is probably streaming or subscription services, and that’s all going to happen outside of Amazon—on other people’s apps and on other people’s devices—unless Amazon figures out a way to own the point-of-purchase customer relationship. So it’s trying to do that with hardware. I’m not sure that’s necessary; I think Amazon could do just as well positioning itself as the premiere shopping, streaming, and media consumption app on everyone else’s devices. But the present-day competitive landscape makes that very hard to do. Every hardware platform wants to own the point-of-purchase for content, too.

Jeff Bezos is probably the smartest CEO in the entire country, and high in the running for smartest in the world. He’s the most brilliant retail mind since Sam Walton. He may be the best pure businessperson of our generation. If anyone can figure out a way to crack this space, he can. But if he’s serious about hardware, he’ll need to figure out how to add something new and exciting to his hardware. Something exclusive. Retail is all about price, selection, and convenience. Hardware is still very much about razzle-dazzle. Amazon has never been a razzle-dazzle company. Amazon released the Kindle because it needed a reader. Amazon released the Fire because it needed a full-featured tablet. It can’t just release a phone because it needs a phone. Consumers need more than that.

But I agree with Bezos’s assertion that the smartphone market is still in its infancy. The best is yet to come. But Amazon will need to deliver the best—stuff we’ve not even thought of yet—if it’s going to make a serious bid for a place at the table.

Niaz: What do you think about the future of social media? How are things going to evolve with Facebook and Twitter? We have text (Twitter), photos (Instagram), videos (Vine), and the combination (Facebook); what’s the next platform for social media? Should we expect additions to social media or the simplification/streamlining of it?

Jon: Two major, semi-competing forces are going to shape social media in the next few years. The first is unbundling. Facebook, Twitter, and other players are going to put out, or buy, dozens of single-purpose apps and networks in an attempt to occupy as much real estate on your home screen as they can. Because they know your attention span is limited, and that the home screen is all-important. The second force is what I’ll call app fatigue, or perhaps more accurately, marginal app utility. There comes a point where people have more apps than they know what to do with, and hence, apps that get relegated outside the home screen are going to fall by the wayside. This creates a countervailing pressure to make your core app as relevant as possible, so that it maintains its place in the user’s daily mindset, and occupies the Fifth Avenue real estate that is the home screen, or better yet, the dock.

A lot of people mocked Facebook’s acquisitions of Instagram and WhatsApp, but Facebook is keenly aware that Instagram and WhatsApp are home screen apps for hundreds of millions of people. That’s why it hasn’t shut those apps down and integrated them into the Facebook app. Instagram is probably the single most important app to most young people’s lives, and Facebook would have been crazy to kill it or marginalize it. It will go down in recent history as the smartest acquisition Facebook has ever made, and the decision to keep it quasi-independent was a very smart move.

Niaz: As you have seen, there are hundreds of sites, apps, and platforms dedicated to content curation. What do you think about content curation? Are we going to have some kind of social media that’s exclusively dedicated to curation?

Jon: Curation is increasingly necessary in a world with more content than we know what to do with. How do I sort through the pile? How do I find things I’ll like? As an app developer, how in the heck do I get my app in front of the people who’ll like it? Let me tell you: we haven’t even begun to see the future of curation. It’s an important one. Apps, content, and entertainment will be curated through all manner of interesting means: tailored or self-tailored subscriptions, influencers, collaborative filtering methods and other algorithms, tastemakers, lists, and category-centric curation apps.

If someone can become the Google search of the app world, or the Netflix of the app world, or even the New York Times book review of the app world, these are very valuable and very lucrative things to become.
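
Collaborative filtering, one of the curation mechanisms mentioned above, is simple enough to sketch in a few lines. Below is a toy item-based recommender in Python; the ratings matrix and every number in it are invented purely for illustration, and a production recommender (Netflix-scale) would be vastly more sophisticated:

```python
import numpy as np

# Toy user-item ratings matrix (rows: users, cols: apps); 0 = not yet rated.
ratings = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two item rating vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return a.dot(b) / denom if denom else 0.0

n_items = ratings.shape[1]
# Item-item similarity: how alike two apps look, judged by who rated them how.
sim = np.array([[cosine_sim(ratings[:, i], ratings[:, j])
                 for j in range(n_items)] for i in range(n_items)])

def recommend(user, k=1):
    """Score each unrated item by similarity-weighted ratings of items the user has rated."""
    rated = np.nonzero(ratings[user])[0]
    scores = {item: sum(sim[item, r] * ratings[user, r] for r in rated)
              for item in range(n_items) if ratings[user, item] == 0}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend(0))  # → [2]
```

The core idea is that an app a user hasn’t tried is scored by how similar its rating pattern is to the apps that user already likes; real curation engines layer editorial signals, popularity, and context on top of this.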

Niaz: What do you think about Silicon Valley? Is it a mindset or something very special? Do you foresee Silicon Valley expanding or rather replicating in other areas around the world/country?

Jon: Silicon Valley has succeeded because it’s Silicon Valley. That sounds tautological and circular. But it’s important to understand what makes Silicon Valley work if we’re to understand how other locations—or, as I think is more likely, how a more global, distributed system—can replicate it. Silicon Valley has several of the world’s leading technical universities situated in its back yard. It has received decades of investment and government support. It has an unprecedented concentration of risk-seeking capital. It has a feedback loop of successful founders and funders, each of whom plows money, connections, and expertise back into the system. And it has a big tolerance for exploration, for failure, and for dangerously innovative thinking.

Now, none of those things in isolation is sufficient to replicate the whole. But some of those things came from the others. A playbook for replicating Silicon Valley should start with capital, government support (but not government prescription), and top-tier university research and cooperation. In fact, I think it’s virtually impossible to recreate Silicon Valley in a single location in the absence of a world-class technical university. This is why you see the new Silicon Valleys—the ones that actually have a shot at replicating the entire SV ecosystem—springing up in fertile soil that has all the right characteristics, including strong academic systems. Places like Israel, for instance.

But in some cases, I think the race to rebuild, replace, or create anew Silicon Valley is a half-step. The new Silicon Valley will be a distributed ecosystem, powered by services like AngelList and FundersClub, in cooperation with universities and institutions, with distributed access to talent, capital, and mentors. Conventional wisdom holds that you need to concentrate all of these things in one place. I’d say that’s still nominally true, but it can be done virtually. What Amazon Web Services was to the server, so will distributed access be to geographic and physical concentration of the necessary resources.

Niaz: Any last comment?

Jon: As a content person, and as an entertainment person, I’m always on the lookout for people trying new and exciting things in these spaces. I have no desire to “disrupt” Hollywood, but I have a strong desire to shake it up a little, and to direct its energies toward more forward-thinking and customer-centric means of creation and distribution. I’m always happy to chat with entrepreneurs in any space, but in particular, I’d love to talk to anyone and everyone thinking about this space. Feel free to hit me up anytime on Twitter (@jonnathanson) or via email (jonfnathanson @ gmail.com).

Niaz: Thanks a lot for joining us and sharing your great ideas, insights, and knowledge. We wish you the best of luck in all of your upcoming endeavors.

Jon: Thanks so much for having me! I am a big fan of your interviews, and I am honored to have talked with you.

_  _  _  _  ___  _  _  _  _

Further Reading:

1. Horace Dediu on Asymco, Apple and Future of Computing

2. Irving Wladawsky-Berger on Evolution of Technology and Innovation

3. James Allworth on Disruptive Innovation

4. Gerd Leonhard on Big Data and the Future of Media, Marketing and Technology

5. James Kobielus on Big Data, Cognitive Computing and Future of Product

6. Viktor Mayer-Schönberger on Big Data Revolution

7. Ely Kahn on Big Data, Startup and Entrepreneurship

8. Brian Keegan on Big Data

9. danah boyd on Future of Technology and Social Media

James Kobielus: Big Data, Cognitive Computing and Future of Product

Editor’s Note: As IBM’s Big Data Evangelist, James Kobielus is IBM Senior Program Director, Product Marketing, Big Data Analytics Solutions. He is an industry veteran, a popular speaker and social media participant, and a thought leader in Big Data, Hadoop, Enterprise Data Warehousing, Advanced Analytics, Business Intelligence, Data Management, and Next Best Action Technologies. He works with IBM’s product management and marketing teams in Big Data. He has spoken at such leading industry events as IBM Information On Demand, IBM Big Data Integration and Governance, Hadoop Summit, Strata, and the Forrester Business Process Forum. He has published several business technology books and is a very popular provider of original commentary on blogs and many social media channels.

To learn more about his research, work, ideas, and writing, please check out his articles and posts online.

eTalk’s Niaz Uddin recently interviewed James Kobielus to gain insights into his ideas, research, and work in the field of Big Data. The interview is given below.

Niaz: Dear James, thank you so much for joining us in the midst of your busy schedule. We are very thrilled and honored to have you at eTalks.

James: And I’m thrilled and honored that you asked me.

Niaz: You are a leading expert on Big Data, as well as on such enabling technologies as enterprise data warehousing, advanced analytics, Hadoop, cloud services, database management systems, business process management, business intelligence, and complex-event processing. To begin our interview, can you please tell us about Big Data? How does Big Data make sense of the new world?

James: Big Data refers to approaches for extracting deep value from advanced analytics and trustworthy data at all scales. At the heart of advanced analytics is data mining, which is all about using statistical analysis to find non-obvious patterns (segmentations, correlations, trends, propensities, etc.) within historical data sets.

Some might refer to advanced analytics as tools for “making sense” of this data in ways that are beyond the scope of traditional reporting and visualization. As we aggregate and mine a wider variety of data sources, we can find far more “sense”–also known as “insights”–that previously lay under the surface. Likewise, as we accumulate a larger volume of historical data from these sources and incorporate a wider variety of variables from them into our models, we can build more powerful predictive models of what might happen under various future circumstances. And if we can refresh this data rapidly with high-velocity high-quality feeds, while iterating and refining our models more rapidly, we can ensure that our insights reflect the latest, greatest data and analytics available.

That’s the power of Big Data: achieve more data-driven insights (aka “making sense”) by enabling our decision support tools to leverage the “3 Vs”: a growing Volume of stored data, higher Velocity of data feeds, and broader Variety of data sources.
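
As a concrete toy illustration of the data mining James describes (surfacing a correlation in historical data, then fitting a simple predictive model for future scenarios), consider the following Python sketch. The ad-spend and sales figures are invented purely for the example:

```python
import numpy as np

# Hypothetical historical data: weekly ad spend (thousands) vs. weekly sales.
ad_spend = np.array([10, 12, 15, 11, 20, 25, 24, 30], dtype=float)
sales    = np.array([100, 110, 130, 105, 160, 190, 185, 220], dtype=float)

# Pearson correlation: one of the "non-obvious patterns" mining can surface.
r = np.corrcoef(ad_spend, sales)[0, 1]
print(round(r, 3))

# A simple trend model fit on history supports prediction under new scenarios.
slope, intercept = np.polyfit(ad_spend, sales, 1)
forecast = slope * 35 + intercept  # predicted sales if spend rises to 35
```

Real-world data mining works the same way at vastly larger scale and with far richer models (segmentations, propensities, and so on), and the "3 Vs" determine how much history, how fresh, and from how many sources such models can learn.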

Niaz: As you know, Big Data has already started to redefine search, media, computing, social media, products, services, and so on. The availability of data is helping us analyze trends and do interesting things in more accurate and efficient ways than before. What are some of the most interesting uses of Big Data out there today?

James: Where do I start? There are interesting uses of Big Data in most industries and in most business functions.

I think cognitive computing applications of Big Data are among the most transformative tools in modern business.

Cognitive computing is a term that probably goes over the head of most of the general public. IBM defines it as the ability of automated systems to learn and interact naturally with people to extend what either man or machine could do on their own, thereby helping human experts drill through big data rapidly to make better decisions.

One way I like to describe cognitive computing is as the engine behind “conversational optimization.” In this context, the “cognition” that drives the “conversation” is powered by big data, advanced analytics, machine learning and agile systems of engagement. Rather than rely on programs that predetermine every answer or action needed to perform a function or set of tasks, cognitive computing leverages artificial intelligence and machine learning algorithms that sense, predict, infer and, if they drive machine-to-human dialogues, converse.

Cognitive computing performance improves over time as systems build knowledge and learn a domain’s language and terminology, its processes and its preferred methods of interacting. This is why it’s such a powerful conversation optimizer. The best conversations are deep in give and take, questioning and answering, tackling topics of keenest interest to the conversants. When one or more parties has deep knowledge and can retrieve it instantaneously within the stream of the moment, the conversation quickly blossoms into a more perfect melding of minds. That’s why it has been deployed into applications in healthcare, banking, education and retail that build domain expertise and require human-friendly interaction models.

IBM Watson is one of the most famous exemplars of the power of cognitive computing driving agile human-machine conversations.  In its famous “Jeopardy!” appearance, Watson illustrated how its Deep Question and Answer technology—which is cognitive computing to the core—can revolutionize the sort of highly patterned “conversation” characteristic of a TV quiz show. By having its Deep Q&A results rendered (for the sake of that broadcast) in a synthesized human voice, Watson demonstrated how it could pass (and surpass) any Turing test that tried to tell whether it was a computer rather than, say, Ken Jennings. After all, the Turing test is conversational at its very core.

What’s powering Watson’s Deep Q&A technology is an architecture that supports an intelligent system of engagement. Such an architecture is able to mimic real human conversation, in which the dialogue spans a broad, open domain of subject matter; uses natural human language; is able to process complex language with a high degree of accuracy, precision and nuance; and operates with speed-of-thought fluidity.

Where the “Jeopardy!” conversational test was concerned (and where the other participants were humans literally at the top of that game), Watson was super-optimized. However, in the real world of natural human conversation, the notion of “conversation optimization” might seem, at first glance, like a pointy-headed pipedream par excellence. But you don’t have to be an academic sociologist to realize that society, cultures and situational contexts impose many expectations, constraints and other rules to which our conversations and actions must conform (or face disapproval, ostracism, or worse). Optimizing our conversations is critical to surviving and thriving in human society.

Wouldn’t it be great to have a Watson-like Deep Q&A adviser to help us understand the devastating faux pas to avoid and the right bon mot to drop into any conversation while we’re in the thick of it? That’s my personal dream, and I’ll bet that before long, with mobile and social coming into everything, it will be quite feasible (no, this is not a product announcement—just the dream of one IBMer). But what excites me even more (and is definitely not a personal pipedream) is IBM Watson Engagement Advisor, which we unveiled earlier this year. It is a cognitive-computing assistant that revolutionizes what’s possible in multichannel B2C conversations. The solution’s “Ask Watson” feature uses Deep Q&A to greet customers, conduct contextual conversations on diverse topics, and ensure that the overall engagement is rich with answers, guidance and assistance.

Cognitive/conversational computing is also applicable to “next best action,” which is one of today’s hottest new focus areas in intelligent systems. At its heart, next best action refers to an intelligent infrastructure that optimizes agile engagements across many customer-facing channels, including portal, call center, point of sale, e-mail and social. With cognitive-computing infrastructure as the silent assistant, customers engage in a never-ending whirligig of conversations with humans and, increasingly, with automated bots, recommendation engines and other non-human components that, to varying degrees, mimic real-human conversation.

Niaz: So do you think machine learning is the right way to analyze Big Data?

James: Machine learning is an important approach for extracting fresh insights from unstructured data in an automated fashion, but it’s not the only approach. For example, machine learning doesn’t eliminate the need for data scientists to build segmentation, regression, propensity, and other models for data mining and predictive analytics.

Fundamentally, machine learning is a productivity tool for data scientists, helping them to get smarter, just as machine learning algorithms can’t get smarter without some ongoing training by data scientists. Machine learning allows data scientists to train a model on an example data set, and then leverage algorithms that automatically generalize and learn both from that example and from fresh feeds of data. To varying degrees, you’ll see the terms “unsupervised learning,” “deep learning,” “computational learning,” “cognitive computing,” “machine perception,” “pattern recognition,” and “artificial intelligence” used in this same general context.
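
The loop James describes (train a model on an example data set, then let it generalize to fresh feeds of data) can be sketched with a toy nearest-centroid classifier. The data points and the choice of algorithm here are illustrative assumptions, not any specific IBM tooling:

```python
import numpy as np

# The labeled "example data set" a data scientist provides for training.
X_train = np.array([[1.0, 1.2], [0.9, 1.1], [4.0, 3.8], [4.2, 4.1]])
y_train = np.array([0, 0, 1, 1])  # two customer segments

# "Training": learn one centroid per segment from the labeled examples.
centroids = {c: X_train[y_train == c].mean(axis=0) for c in np.unique(y_train)}

def classify(x):
    """Generalize to fresh data: assign the label of the nearest learned centroid."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Fresh feed of data the model has never seen before.
print(classify(np.array([1.1, 1.0])))  # → 0
print(classify(np.array([3.9, 4.0])))  # → 1
```

The point is not the algorithm (a real pipeline might use regression, clustering, or deep learning) but the division of labor: the data scientist curates examples and chooses the model, and the machine then applies what it learned to new data automatically.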

Machine learning doesn’t mean that the resultant learning is always superior to what human analysts might have achieved through more manual knowledge-discovery techniques. But you don’t need to believe that machines can think better than or as well as humans to see the value of machine learning. We gladly offload many cognitive processes to automated systems where there just aren’t enough flesh-and-blood humans to exercise their highly evolved brains on various analytics tasks.

Niaz: What are the available technologies out there that help us analyze data? Can you please briefly tell us about Big Data technologies and their important uses?

James: Once again, it’s a matter of “where do I start?” The range of Big Data analytics technologies is wide and growing rapidly. We live in the golden age of database and analytics innovation. Their uses are everywhere: in every industry, every business function, and every business process, both back-office and customer-facing.

For starters, Big Data is much more than Hadoop. Another big data “H”—hybrid—is becoming dominant, and Hadoop is an important (but not all-encompassing) component of it. In the larger evolutionary perspective, big data is evolving into a hybridized paradigm under which Hadoop, massively parallel processing enterprise data warehouses, in-memory columnar, stream computing, NoSQL, document databases, and other approaches support extreme analytics in the cloud.

Hybrid architectures address the heterogeneous reality of big data environments and respond to the need to incorporate both established and new analytic database approaches into a common architecture. The fundamental principle of hybrid architectures is that each constituent big data platform is fit-for-purpose to the role for which it’s best suited. These big data deployment roles may include any or all of the following: data acquisition, collection, transformation, movement, cleansing, staging, sandboxing, modeling, governance, access, delivery, archiving, and interactive exploration. In any role, a fit-for-purpose big data platform often supports specific data sources, workloads, applications, and users.

Hybrid is the future of big data because users increasingly realize that no single type of analytic platform is always best for all requirements. Also, platform churn—plus the heterogeneity it usually produces—will make hybrid architectures more common in big data deployments.

Hybrid deployments are already widespread in many real-world big data deployments. The most typical are the three-tier—also called “hub-and-spoke”—architectures. These environments may have, for example, Hadoop (e.g., IBM InfoSphere BigInsights) in the data acquisition, collection, staging, preprocessing, and transformation layer; relational-based MPP EDWs (e.g., IBM PureData System for Analytics) in the hub/governance layer; and in-memory databases (e.g., IBM Cognos TM1) in the access and interaction layer.

The complexity of hybrid architectures depends on range of sources, workloads, and applications you’re trying to support. In the back-end staging tier, you might need different preprocessing clusters for each of the disparate sources: structured, semi-structured, and unstructured.

In the hub tier, you may need disparate clusters configured with different underlying data platforms—RDBMS, stream computing, HDFS, HBase, Cassandra, NoSQL, and so on—and corresponding metadata, governance, and in-database execution components.

And in the front-end access tier, you might require various combinations of in-memory, columnar, OLAP, dimensionless, and other database technologies to deliver the requisite performance on diverse analytic applications, ranging from operational BI to advanced analytics and complex event processing.
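The fit-for-purpose principle behind these tiers can be pictured as a simple routing table. The sketch below is purely illustrative — the tier names follow the three-tier description above, but the platform identifiers are placeholder labels, not product recommendations:

```python
# Map (data type, deployment role) to the platform best suited for it,
# mirroring a three-tier hub-and-spoke architecture. All platform names
# are illustrative placeholders.
ROUTING = {
    # Staging tier: acquisition, cleansing, transformation.
    ("unstructured", "staging"): "hadoop_cluster",
    ("structured", "staging"): "etl_grid",
    # Hub tier: governed system of record.
    ("structured", "hub"): "mpp_warehouse",
    # Access tier: low-latency interactive exploration.
    ("structured", "access"): "in_memory_columnar",
}

def place_workload(data_type, role):
    """Pick the fit-for-purpose platform for this data type and role."""
    try:
        return ROUTING[(data_type, role)]
    except KeyError:
        raise ValueError(f"no fit-for-purpose platform for {data_type}/{role}")

print(place_workload("unstructured", "staging"))  # hadoop_cluster
```

The point of the exercise: no single entry in the table serves every role, which is exactly the argument for hybrid architectures.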

Niaz: That’s really amazing. How do you connect these two dots: Big Data Analytics and Cognitive Computing? How does this connection make sense?

James: The relationship between cognitive computing and Big Data is simple. Cognitive computing is an advanced analytic approach that helps humans drill through the unstructured data within Big Data repositories more rapidly in order to see correlations, patterns, and insights.

Think of cognitive computing as a “speed-of-thought accelerator.” Speed of thought is something we like to imagine operates at a single high-velocity setting. But that’s just not the case. Some modes of cognition are painfully slow, such as pondering the bewildering panoply of investment options available under your company’s retirement plan. But some other modes are instantaneous, such as speaking your native language, recognizing an old friend, or sensing when your life may be in danger.

None of this is news to anybody who studies cognitive psychology or has followed advances in artificial intelligence, aka AI, over the past several decades. Different modes of cognition have different styles, speeds, and spheres of application.

When we speak of “cognitive computing,” we’re generally referring to the ability of automated systems to handle the conscious, critical, logical, attentive, reasoning mode of thought that humans engage in when they, say, play “Jeopardy!” or try to master some rigorous academic discipline. This is the “slow” cognition that Nobel-winning psychologist/economist Daniel Kahneman discussed in a recent IBM Colloquium speech.

As anybody who has ever watched an expert at work will attest, this “slow” thinking can move at lightning speed when the master is in his or her element. When a subject-domain specialist is expounding on their field of study, they often move rapidly from one brilliant thought to the next. It’s almost as if these thought-gems automatically flash into their mind without conscious effort.

This is the cognitive agility that Kahneman examined in his speech. He described the ability of humans to build skills, which involves mastering “System 2” cognition (slow, conscious, reasoning-driven) so that it becomes “System 1” (fast, unconscious, action-driven). Not just that, but an expert is able to switch between both modes of thought within the moment when it becomes necessary to rationally ponder some new circumstance that doesn’t match the automated mental template they’ve developed. Kahneman describes System 2 “slow thinking” as well-suited for probability-savvy correlation thinking, whereas System 1 “fast thinking” is geared to deterministic causal thinking.

Kahneman’s “System 2” cognition–slow, rule-centric, and attention-dependent–is well-suited for acceleration and automation on big data platforms such as IBM Watson. After all, a machine can process a huge knowledge corpus, myriad fixed rules, and complex statistical models far faster than any mortal. Just as important, a big-data platform doesn’t have the limited attention span of a human; consequently, it can handle many tasks concurrently without losing its train of thought.

Also, Kahneman’s “System 1” cognition–fast, unconscious, action-driven–is not necessarily something we need to hand to computers alone. We can accelerate it by facilitating data-driven interactive visualization by human beings, at any level of expertise. When a big-data platform drives a self-service business intelligence application such as IBM Cognos, it can help users to accelerate their own “System 1” thinking by enabling them to visualize meaningful patterns in a flash without having to build statistical models, do fancy programming, or indulge in any other “System 2” thought.

And finally, based on those two insights, it’s clear to me that cognitive computing is not simply limited to the Watsons and other big-data platforms of the world. Any well-architected big data, advanced analytics, or business intelligence platform is essentially a cognitive-computing platform. To the extent it uses machines to accelerate the slow “System 2” cognition and/or provides self-service visualization tools to help people speed up their wetware’s “System 1” thinking, it’s a cognitive-computing platform.

Now I will expand upon the official IBM definition of “cognitive computing” to put it in a larger frame of reference. As far as I’m concerned, the core criterion of cognitive computing is whether the system, however architected, has the net effect of speeding up any form of cognition, executing on hardware and/or wetware.

Niaz: How is Big Data Analytics changing the nature of building great products? What do you think about the future of products?

James: That’s a great question that I haven’t explored in much depth. My sense is that more “products” are in fact “services”–such as online media, entertainment, and gaming–that, as an integral capability, feed on the Big Data generated by their users. Companies tune the designs, interaction models, and user experiences of these productized services through Big Data analytics. To the extent that users respond or don’t respond to particular features of these services, that will be revealed in the data and will trigger continuous adjustments in product/service design. New features might be added on a probationary basis, to see how users respond, and just as quickly withdrawn or ramped up in importance.

This new product development/refinement loop is often referred to as “real-world experiments.” The process of continuous, iterative, incremental experimentation both generates and depends on a steady feed of Big Data. It also requires data scientists to play a key role in the product-refinement cycle, in partnership with traditional product designers and engineers.  Leading-edge organizations have begun to emphasize real-world experiments as a fundamental best practice within their data-science, next-best-action, and process-optimization initiatives.

Essentially, real-world experiments put the data-science “laboratory” at the heart of the big data economy.  Under this approach, fine-tuning of everything–business model, processes, products, and experiences–becomes a never-ending series of practical experiments. Data scientists evolve into an operational function, running their experiments–often known as “A/B tests”–24×7 with the full support and encouragement of senior business executives.

The beauty of real-world experiments is that you can continuously and surreptitiously test diverse product models inline within your running business. Your data scientists can compare results across differentially controlled scenarios in a systematic, scientific manner. They can use the results of these in-production experiments–such as improvements in response, acceptance, satisfaction, and defect rates on existing products/services–to determine which work best with various customers under various circumstances.
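A minimal sketch of such an in-production "A/B test" comparison: two product variants, invented response counts, and a standard two-proportion z-test as the (assumed) method for judging whether variant B's response rate genuinely beats variant A's.

```python
import math

def ab_test(conv_a, n_a, conv_b, n_b):
    """Return (lift, z): B's response-rate lift over A and its z-score."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return p_b - p_a, (p_b - p_a) / se

# Invented data: 10,000 users per variant, 200 vs. 260 responses.
lift, z = ab_test(conv_a=200, n_a=10_000, conv_b=260, n_b=10_000)
print(f"lift={lift:.3%}, z={z:.2f}")  # |z| > 1.96 ≈ significant at 5%
```

Production experimentation platforms layer scheduling, randomization, and guardrail metrics on top, but the statistical core of each comparison looks much like this.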

Niaz: What is a big data product? How can someone make beautiful stuff with data?

James: What is a Big Data product? It’s any product or service that helps people to extract deep value from advanced analytics and trustworthy data at all scales, but especially at the extreme scales of volume (petabytes and beyond), velocity (continuous, streaming, real-time, low-latency), and/or variety (structured, semi-structured, unstructured, streaming, etc.). That definition encompasses products that provide the underlying data storage, database management, algorithms, metadata, modeling, visualization, integration, governance, security, management, and other necessary features to address these use cases. If you track back to my answer above about “hybrid” architectures, you’ll see a discussion of some of the core technologies.

Making “beautiful stuff with data”? That suggests advanced visualization to call out the key insights in the data. The best data visualizations provide functional beauty: they make the process of sifting through data easier, more pleasant, and more productive for end users, business analysts, and data scientists.

Niaz: Can you please tell us about building a data-driven culture that fosters data-driven innovation to build the next big product?

James: A key element of any data-driven culture is establishing a data science center of excellence. Data scientists are the core developers in this new era of Big Data, advanced analytics, and cognitive computing.

Game-changing analytics applications don’t spring spontaneously from bare earth. You must plant the seeds through continuing investments in applied data science and, of course, in the big data analytics platforms and tools that bring it all to fruition. But you’ll be tilling infertile soil if you don’t invest in sustaining a data science center of excellence within your company. Applied data science is all about putting the people who drill the data in constant touch with those who understand the applications. In spite of the mythology surrounding geniuses who produce brilliance in splendid isolation, smart people really do need each other. Mutual stimulation and support are critical to the creative process, and science, in any form, is a restlessly creative exercise.

In establishing a center of excellence, you may go the formal or informal route. The formal approach is to institute ongoing process for data-science collaboration, education, and information sharing. As such, the core function of your center of excellence might be to bridge heretofore siloed data-science disciplines that need to engage more effectively. The informal path is to encourage data scientists to engage with each other using whatever established collaboration tools, communities, and confabs your enterprise already has in place. This is the model under which centers of excellence coalesce organically from ongoing conversations.

Creeping polarization, like general apathy, will kill your data science center of excellence if you don’t watch out. Don’t let the center of excellence, formal or informal, degenerate into warring camps of analytics professionals trying to hardsell their pet approaches as the one true religion. Centers of excellence must serve as a bridge, not a barrier, for communication, collegiality, and productivity in applied data science.

Niaz: As you know leaders and managers have always been challenged to get the right information to make good decisions. Now with the digital revolution and technological advancement, they have opportunities to access huge amount of data. How this trend will change management practice? What do you think about the future of decision making, strategy and running organizations?

James: Business agility is paramount in a turbulent world. Big Data is changing the way that management responds to–and gets ahead of–changes in their markets, competitive landscape, and operational conditions.

Increasingly, I prefer to think of big data in the broader context of business agility. What’s most important is that your data platform has the agility to operate cost-effectively at any scale, speed, and scope of business that your circumstances demand.

In terms of scale of business, organizations operate at every scale from breathtakingly global to intensely personal. You should be able to acquire a low-volume data platform and modularly scale it out to any storage, processing, memory and I/O capacity you may need in the future. Your platform should elastically scale up and down as requirements oscillate. Your end-to-end infrastructure should also be able to incorporate platforms of diverse scales—petabyte, terabyte, gigabyte, etc.—with those platforms specialized to particular functions and all of them interoperating in a common fabric.

Where speed is concerned, businesses often have to keep pace with stop-and-start rhythms that oscillate between lightning fast and painfully slow. You should be able to acquire a low-velocity data platform and modularly accelerate it through incorporation of faster software, faster processors, faster disks, faster cache and more DRAM as your need for speed grows. You should be able to integrate your data platform with a stream computing platform for true real-time ingest, processing and delivery. And your platform should also support concurrent processing of diverse latencies, from batch to streaming, within a common fabric.

And on the matter of scope, businesses manage almost every type of human need, interaction and institution. You should be able to acquire a low-variety data platform—perhaps a RDBMS dedicated to marketing—and be able to evolve it as needs emerge into a multifunctional system of record supporting all business functions. Your data platform should have the agility to enable speedy inclusion of a growing variety of data types from diverse sources. It should have the flexibility to handle structured and unstructured data, as well as events, images, video, audio and streaming media with equal agility. It should be able to process the full range of data management, analytics and content management workloads. It should serve the full scope of users, devices and downstream applications.

Agile Big Data platforms can serve as the common foundation for all of your data requirements. Because, after all, you shouldn’t have to go big, fast, or all-embracing in your data platforms until you’re good and ready.

Niaz: In your opinion, given the current available Big Data technologies, what is the most difficult challenge in filtering big data to find useful information?

James: The most difficult challenge is in figuring out which data to ignore, and which data is trustworthy enough to serve as a basis for downstream decision-support and advanced analytics.

Most important, don’t always trust the “customer sentiment” that your social-media listening tools report as if it were gospel. Yes, you care deeply about how your customers regard your company, your products, and your quality of service. You may be listening to social media to track how your customers—collectively and individually—are voicing their feelings. But do you bother to save and scrutinize every last tweet, Facebook status update, and other social utterance from each of your customers? And if you are somehow storing and analyzing that data—which is highly unlikely—are you linking the relevant bits of stored sentiment data to each customer’s official record in your databases?

If you are, you may be the only organization on the face of the earth that makes the effort. Many organizations implement tight governance only on those official systems of record on which business operations critically depend, such as customers, finances, employees, products, and so forth. For those data domains, data management organizations that are optimally run have stewards with operational responsibility for data quality, master data management, and information lifecycle management.

However, for many big data sources that have emerged recently, such stewardship is neither standard practice nor routine for many new subject-matter data domains. These new domains refer mainly to unstructured data that you may be processing in your Hadoop clusters, stream-computing environments, and other big data platforms, such as social, event, sensor, clickstream, geospatial, and so on.

The key difference from system-of-record data is that many of the new domains are disposable to varying degrees and are not regarded as a single version of the truth about some real-world entity. Instead, data scientists and machine learning algorithms typically distill the unstructured feeds for patterns and subsequently discard the acquired source data, which quickly become too voluminous to retain cost-effectively anyway. Consequently, you probably won’t need to apply much, if any, governance and security to many of the recent sources.

Where social data is concerned, there are several reasons for going easy on data quality and governance. First of all, data quality requirements stem from the need for an officially sanctioned single version of the truth. But the notion that any individual social media message constitutes the truth of how a specific customer or prospect feels about you is highly implausible. After all, people prevaricate, mislead, and exaggerate in every possible social context, and not surprisingly they convey the same equivocation in their tweets and other social media remarks. If you imagine that the social streams you’re filtering are rich founts of only honest sentiment, you’re unfortunately mistaken.

Second, social sentiment data rarely has the definitive, authoritative quality of an attribute—name, address, phone number—that you would include in or link to a customer record. In other words, few customers declare their feelings about brands and products in the form of tweets or Facebook updates that represent their semiofficial opinion on the topic. Even when people are bluntly voicing their opinions, the clarity of their statements is often hedged by the limitations of most natural human language. Every one of us, no matter how well educated, speaks in sentences that are full of ambiguity, vagueness, situational context, sarcasm, elliptical speech, and other linguistic complexities that may obscure the full truth of what we’re trying to say. Even highly powerful computational linguistic algorithms are challenged when wrestling these and other peculiarities down to crisp semantics.

Third, even if every tweet was the gospel truth about how a customer is feeling and all customers were amazingly articulate on all occasions, the quality of social sentiment usually emerges from the aggregate. In other words, the quality of social data lies in the usefulness of the correlations, trends, and other patterns you derive from it. Although individual data points can be of marginal value in isolation, they can be quite useful when pieced into a larger puzzle.
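That "quality in the aggregate" point can be seen in a toy example. The individual sentiment scores below (invented data on a -1 to +1 scale) are noisy and contradictory, yet the weekly averages reveal a usable trend:

```python
from statistics import mean

# Invented per-post sentiment scores, grouped by week.
weekly_scores = {
    "week1": [0.9, -0.8, 0.2, -0.1, 0.3],   # noisy individual posts
    "week2": [0.6, -0.4, 0.5, 0.7, -0.2],
    "week3": [0.8, 0.4, 0.9, -0.1, 0.6],
}

# Aggregate each week: the signal lives in the average, not the posts.
trend = {week: round(mean(scores), 2)
         for week, scores in weekly_scores.items()}
print(trend)  # aggregate sentiment improving week over week
```

No single post in any week is trustworthy on its own, but the week-over-week rise in the average is exactly the kind of pattern the interview argues is worth keeping.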

Consequently, there is little incremental business value from scrutinizing, retaining, and otherwise managing every single piece of social media data that you acquire. Typically, data scientists drill into it to distill key patterns, trends, and root causes, and you would probably purge most of it once it has served its core tactical purpose. This process generally takes a fair amount of mining, slicing, and dicing. Many social-listening tools, including the IBM® Cognos® Consumer Insight application, are geared to assessing and visualizing the trends, outliers, and other patterns in social sentiment. You don’t need to retain every single thing that your customers put on social media to extract the core intelligence that you seek, as in the following questions: Do they like us? How intensely? Is their positive sentiment improving over time? In fact, doing so might be regarded as encroaching on privacy, so purging most of that data once you’ve gleaned the broader patterns is advised.

Fourth, even outright customer lies propagated through social media can be valuable intelligence if we vet and analyze them effectively. After all, it’s useful knowing whether people’s words—”we love your product”—match their intentions—”we have absolutely no plans to ever buy your product”—as revealed through their eventual behavior—for example, buying your competitor’s product instead.

If we stay hip to this quirk of human nature, we can apply the appropriate predictive weights to behavioral models that rely heavily on verbal evidence, such as tweets, logs of interactions with call-center agents, and responses to satisfaction surveys. I like to think of these weights as a truthiness metric, courtesy of Stephen Colbert.
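One way to picture such a "truthiness" weight — a hypothetical scheme of my own construction, not anything Kobielus or IBM specifies — is to scale each piece of verbal evidence by how often statements in that channel have historically matched subsequent behavior:

```python
# Hypothetical per-channel "truthiness" weights: the historical rate at
# which words in that channel agreed with later behavior. All numbers
# are invented for illustration.
channel_truthiness = {"tweet": 0.3, "call_center": 0.7, "survey": 0.5}

def weighted_sentiment(signals):
    """Combine (channel, score) pairs into one behavior-predictive score."""
    total = sum(channel_truthiness[ch] for ch, _ in signals)
    return sum(channel_truthiness[ch] * score
               for ch, score in signals) / total

# A glowing tweet counts for less than a lukewarm call-center log.
signals = [("tweet", 0.9), ("call_center", -0.4), ("survey", 0.1)]
print(round(weighted_sentiment(signals), 3))
```

In a real behavioral model these weights would be fitted from matched word-versus-deed data rather than assigned by hand, but the effect is the same: discounting channels where words and intentions diverge.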

What we can learn from social sentiment data of dubious quality is the situational contexts in which some customer segments are likely to be telling the truth about their deep intentions. We can also identify the channels in which they prefer to reveal those truths. This process helps determine which sources of customer sentiment data to prioritize and which to ignore in various application contexts.

Last but not least, apply only strong governance to data that has a material impact on how you engage with customers, remembering that social data rarely meets that criterion. Customer records contain the key that determines how you target pitches to them, how you bill them, where you ship their purchases, and so forth. For these purposes, the accuracy, currency, and completeness of customers’ names, addresses, billing information, and other profile data are far more important than what they tweeted about the salesclerk in your Poughkeepsie branch last Tuesday. If you screw up the customer records, the adverse consequences for all concerned are far worse than if you misconstrue their sentiment about your new product as slightly positive, when in fact it’s deeply negative.

However, if you greatly misinterpret an aggregated pattern of customer sentiment, the business risks can be considerable. Customers’ aggregate social data helps you compile a comprehensive portrait of the behavioral tendencies and predispositions of various population segments. This compilation is essential market research that helps gauge whether many high-stakes business initiatives are likely to succeed. For example, you don’t want to invest in an expensive promotional campaign if your target demographic isn’t likely to back up their half-hearted statement that your new product is “interesting” by whipping out their wallets at the point of sale.

The extent to which you can speak about the quality of social sentiment data all comes down to relevance. Sentiment data is good only if it is relevant to some business initiative, such as marketing campaign planning or brand monitoring. It is also useful only if it gives you an acceptable picture of how customers are feeling and how they might behave under various future scenarios. Relevance means having sufficient customer sentiment intelligence, in spite of underlying data quality issues, to support whatever business challenge confronts you.

Niaz: How do you see data science evolving in the near future?

James: In the near future, many business analysts will enroll in data science training curricula to beef up their statistical analysis and modeling skills in order to stay relevant in this new age.

However, they will confront a formidable learning curve. To be an effective, well-rounded data scientist, you will need a degree, or something substantially like it, to prove you’re committed to this career. You will need to submit yourself to a structured curriculum to certify you’ve spent the time, money and midnight oil necessary for mastering this demanding discipline.

Sure, there are run-of-the-mill degrees in data-science-related fields, and then there are uppercase, boldface, bragging-rights “DEGREES.” To some extent, it matters whether you get that old data-science sheepskin from a traditional university vs. an online school vs. a vendor-sponsored learning program. And it matters whether you only logged a year in the classroom vs. sacrificed a considerable portion of your life reaching for the golden ring of a Ph.D. And it certainly matters whether you simply skimmed the surface of old-school data science vs. pursued a deep specialization in a leading-edge advanced analytic discipline.

But what matters most to modern business isn’t that every data scientist has a big honking doctorate. What matters most is that a substantial body of personnel has a common grounding in core curriculum of skills, tools and approaches. Ideally, you want to build a team where diverse specialists with a shared foundation can collaborate productively.

Big data initiatives thrive if all data scientists have been trained and certified on a curriculum with the following foundation: paradigms and practices, algorithms and modeling, tools and platforms, and applications and outcomes.

Classroom instruction is important, but a data-science curriculum that is 100 percent devoted to reading books, taking tests and sitting through lectures is insufficient. Hands-on laboratory work is paramount for a truly well-rounded data scientist. Make sure that your data scientists acquire certifications and degrees that reflect their having actually developed statistical models that use real data and address substantive business issues.

A business-oriented data-science curriculum should produce expert developers of statistical and predictive models. It should not degenerate into a program that produces analytics geeks with heads stuffed with theory but whose diplomas are only fit for hanging on the wall.

Niaz: We have already seen the huge implication and remarkable results of Big Data from tech giants. Do you think Big Data can also have great role in solving social problems? Can we measure and connect all of our big and important social problems and design the sustainable solutions with the help of Big Data?

James: Of course. Big Data is already being used worldwide to address the most pressing problems confronting humanity on this planet. In terms of “measuring and connecting all our big and important social problems and designing sustainable solutions,” that’s a matter for collective human ingenuity. Big Data is a tool, not a panacea.

Niaz: Can you please tell us about ‘Open Source Analytics’ for Big Data? What are the initiatives regarding open source that IBM’s Big Data group and others group (startups) have done or are planning?

James: The principal open-source communities in the big data analytics industry are Apache Hadoop and R. IBM is an avid participant in both communities, and has incorporated these technologies into our solution portfolio.

Niaz: What are some of the concerns (privacy, security, regulation) that you think can dampen the promise of Big Data?

James: You’ve named three of them. Overall, businesses should embrace the concept of “privacy by design” – a systematic approach that takes privacy into account from the start – instead of trying to add protection after the fact. In addition, the sheer complexity of the technologies and their learning curves are a barrier to realizing their full promise. All of these factors introduce time, cost, and risk into the Big Data ROI equation.

Niaz: What are the new technologies you are mostly passionate about? What are going to be the next big things?

James: Where to start? I prefer that your readers follow my IBM Big Data Hub blog to see the latest things I’m passionate about.

Niaz: Last but not least, what is your advice for Big Data startups and for the people who are working with Big Data?

James: Find your niche in the Big Data analytics industry ecosystem, go deep, and deliver innovation. It’s a big, growing, exciting industry. Brace yourself for constant change. Be prepared to learn (and unlearn) something new every day.

Niaz: Dear James, thank you very much for your invaluable time and for sharing your incredible ideas, insights, knowledge and experiences with us. We wish you the very best of luck in all of your upcoming endeavors.

_  _  _  _  ___  _  _  _  _

Further Reading:

1. Viktor Mayer-Schönberger on Big Data Revolution

2. Gerd Leonhard on Big Data and the Future of Media, Marketing and Technology

3. Ely Kahn on Big Data, Startup and Entrepreneurship

4. Brian Keegan on Big Data

5. danah boyd on Future of Technology and Social Media

6. Irving Wladawsky-Berger on Evolution of Technology and Innovation

7. Horace Dediu on Asymco, Apple and Future of Computing

8. James Allworth on Disruptive Innovation

Peter Weijmarshausen: 3D Printing

Editor’s Note: Peter Weijmarshausen is a pioneer of 3D Printing. He is passionate to make new and exciting technology accessible for everyone. He is Co-Founder and CEO of Shapeways, the leading 3D printing marketplace and community that helps people make, buy and sell anything they want. Shapeways started in the Philips Lifestyle Incubator in the Netherlands in 2007, and spun off as an independent company in 2010. The company is headquartered in New York, with offices in Eindhoven and Seattle. You can read his full bio from here.

eTalk’s Niaz Uddin has interviewed Peter Weijmarshausen recently to gain his ideas and insights on 3D Printing, which are given below.

Niaz: Peter, thank you so much for joining us. We are thrilled to have you at eTalks.

Peter: It’s my pleasure to be here Niaz.

Niaz: You have been working with 3D printing for long time. You have co-founded ‘Shapeways’, the leading 3D printing marketplace and community. And now working as the CEO of ‘Shapeways’. Can you please give us a brief of the evolution of 3D printing?

Peter: I’ve been working with 3D Printing for quite some time now. Prior to Shapeways, I worked for a company that published the first free 3D software, called Blender.

3D Printing has been around for a while. At the time when Shapeways was founded (in 2007), 3D Printing was still very expensive and used primarily for rapid prototyping. People were using 3D software but thought it was impossible to hold their designs in their hands. By 2008, we launched Shapeways.com and started 3D Printing the impossible. In 2010, we spun out of Philips and moved headquarters to New York.

Niaz: Do you think the average person should care about 3D printing and why?

Peter: Definitely. 3D Printing is revolutionizing the way consumers think about products. Currently, we settle for store bought products. With 3D Printing you can customize products to your exact need.

Niaz: What are some of the current applications of 3D Printing?

Peter: There are a ton of applications for 3D Printing. At Shapeways, we have a very diverse community: we see a lot of hobbyists using Shapeways to create custom products to fit their various hobbies, as well as jewelry designers using Shapeways to create beautiful pieces. There are also a host of companies using 3D Printing to fuel innovation in various fields, such as the medical industry.

Niaz: What are the primary issues 3D Printers still need to overcome?

Peter: Learning how to 3D model is still quite hard. That said, we’re working to lower the barrier to entry so that anyone can create real-life products from digital 3D files. We just launched a new API that allows developers to easily create applications that make printable objects!

Niaz: Do you think we can literally make everything with 3D printing?

Peter: Currently, we can’t make everything using 3D Printing. For example, we still can’t 3D Print Electronics.

Niaz: Will we be able to make everything with 3D printing in the near future?

Peter: I don’t see why not.

Niaz: For those who don’t know about Shapeways, can you please give a brief overview of your company?

Peter: On Shapeways, individuals can make, buy, and sell their own products. We 3D print everything on-demand, which means that every order is customized and personalized. By providing a platform for our community members to share ideas and gain access to cutting-edge technology, we’re bringing personalized production to everyone.

Niaz: Do you have any estimate of the number of products that you have already made at Shapeways?

Peter: We currently have over 250,000 community members in over 130 countries and have printed well over 1,000,000 products to date. These numbers continue to grow at an ever-faster rate.

Niaz: What are the most exciting products that ‘Shapeways’ community has created?

Peter: We see so many exciting, amazing products created daily. One of my favorites is the Strandbeest; it has over 90 moving parts and requires no assembly!

Niaz: What are the responses from customers?

Peter: Our community is incredibly grateful for the service we provide. We often receive emails and blog posts thanking us!

Niaz: Any negative feedback?

Peter: As with any company that supplies physical products, we see some customer complaints, but our customer service team is well equipped to handle them.

Niaz: What does Shapeways have planned for 2013?

Peter: We’re currently building out our factory in Long Island City! Once fully built out, we’ll have 30-50 3D printers in LIC capable of printing 3-5 million parts a year. It’s ambitious, but it’s possible, and we can’t wait to see the factory come to life.

Niaz: Wow! That’s really impressive. Where do you see the 3D Printing industry going over the next 5 years?

Peter: We will see products emerge that we’ve never imagined before: mind-blowing shapes and solutions. I envision Shapeways continuing to grow in both employee numbers and locations. I can’t wait to see what will happen in the next five years.

Niaz: Peter, thank you so much for giving us time in the midst of your busy schedule. I wish you good luck with the new factory, as well as with all the exciting things you are doing in the 3D Printing industry.

Peter: You’re welcome Niaz.

_  _  _  _  ___  _  _  _  _

Further Reading:

1. Viktor Mayer-Schönberger on Big Data Revolution

2. Gerd Leonhard on Big Data and the Future of Media, Marketing and Technology

3. Ely Kahn on Big Data, Startup and Entrepreneurship

4. Brian Keegan on Big Data

5. danah boyd on Future of Technology and Social Media

6. Irving Wladawsky-Berger on Evolution of Technology and Innovation

7. Horace Dediu on Asymco, Apple and Future of Computing

8. James Allworth on Disruptive Innovation

Irving Wladawsky-Berger: Evolution of Technology and Innovation

Editor’s Note: Dr. Irving Wladawsky-Berger retired from IBM on May 31, 2007 after 37 years with the company. As Chairman Emeritus of the IBM Academy of Technology, he continues to participate in a number of IBM’s technical strategy and innovation initiatives. He is also Visiting Professor of Engineering Systems at MIT, where he is involved in multidisciplinary research and teaching activities focused on how information technologies are helping transform business organizations and the institutions of society. You can read his full bio here.

eTalk’s Niaz Uddin has interviewed Irving Wladawsky-Berger recently to gain insights about the evolution of technology and innovation, which are given below.

Niaz: Dear Irving, thank you so much for joining us. We are thrilled and honored to have you for eTalks.

Irving Wladawsky-Berger: Niaz, thank you for having me.

Niaz: You began your career at IBM as a researcher in 1970 and retired on May 31, 2007 as Vice President of Technical Strategy and Innovation. From the dawn of supercomputing to the rise of Linux and open source, the internet, cloud computing, disruptive innovation, Big Data, and Smarter Planet, you have been involved with it all, bringing sustainable technological innovations to IBM for 37 years. Can you please give us a brief overview of the evolution of technology and innovation? How have technology trends changed since you joined IBM?

Irving Wladawsky-Berger: Well, it has changed radically from the time I started in 1970 until now. In 1970 there were no personal computers and, needless to say, there was no internet. Computers were expensive and people used them in a time-sharing mode. Usually you needed a contract to be able to operate a computer, and it was relatively expensive at that time. So most of the innovation and research had to be done in a kind of big-science lab environment, whether at a university like MIT or an R&D lab at IBM. Now, all that began to change when personal computers emerged in the 1980s, and especially in the next decade, the 1990s, because personal computers became much more powerful and much less expensive. And then we had the internet. Remember, the internet only really opened up to the world in the mid-90s. All of a sudden, it was much easier for lots of people to have access to the proper technologies and to start doing all kinds of entrepreneurial innovation. Before that it was very expensive, and then with the internet people were able to distribute their offerings online directly to their customers. Previously, they needed distribution channels, and that cost a lot of money. That has changed even more in just the last few years because of the advent of cloud computing. People have started entrepreneurial businesses without even needing to buy computer equipment anymore; they have a laptop or a smartphone that they use to get access to the cloud. As a result, the cost of operating a business is getting lower. This is particularly important for emerging economies like India, Africa, or Latin America, because they don’t have as much access to capital as we do here in the United States. So the availability of the internet, cloud computing, mobile devices, and so on is going to have a huge impact on entrepreneurialism, especially in emerging economies.

Niaz: So what has surprised you most about the rise and spread of the internet over the past 15 years?

Irving Wladawsky-Berger: Well, you know, when I started, before the mid-90s, I was very involved with the internet, but as part of supercomputing. Before then, the internet was primarily used in research labs and universities. It all started to change with the advent of the World Wide Web and the web browser, which made everything much more accessible and so much easier to use. Before browsers, the internet had interfaces that engineers had to learn; it wasn’t really available to the majority of people. The internet was probably like other disruptive technologies: we knew it was exciting, we knew some good things could happen, but most of us couldn’t anticipate how transformative it would become. As an example, the fact that it would so thoroughly transform the media industry, the music industry, newspapers, video streaming, and so on. On the other side, some people were making near-term predictions about the internet like ‘it will totally transform the economy; you don’t need revenue and cash anymore.’ Those predictions were just wrong, because if you are running a business you need revenue, cash, and profit. Some of the predicted changes have taken a lot longer than people thought in the early days, because you needed broadband and things like that. And other changes happened faster than any of us anticipated. It has just been an interesting experience to watch how unpredictable disruptive technologies are.

Niaz: Now what do you think about the future of internet? What significant changes are going to occur in near future?

Irving Wladawsky-Berger: First of all, I think broadband will keep advancing, and that is one of the most important changes. When I started using the internet in the mid-90s, it was 16kb over a dial-up modem. A few years later, it only went to 64kb over a dial-up modem, and then broadband came in, and it keeps getting better and better. Now in some countries, as you know, like South Korea, broadband is extremely fast. In the US we don’t have broadband that good yet, but it is good to see it continuing to improve. Broadband wireless has come along, and that is very nice. I think the rise of mobile devices like smartphones in the last few years has become one of the most important ways of accessing the internet, and it has been an absolute phenomenon. When the internet first showed up in the mid-90s, we were very worried, because to use the growing internet you needed a PC, and in those days PCs were not inexpensive. You needed an internet service provider, and that was not inexpensive either. So there was a strong digital divide, even in an advanced economy like the USA. I remember having a number of important meetings on the digital divide while I was working in Washington in those days. All that has disappeared because, as you know, mobile devices are so inexpensive that just about everybody can afford one now. Not all mobile devices are yet smartphones capable of accessing the internet, but I believe that within a few years just about everybody in the world will be able to access information, resources, and applications. That is going to be gigantic. Finally, the internet, broadband, cloud computing, and disruptive innovations are going to bring the most important changes of the next few decades.

Niaz: As you know, Big Data has become a hot topic of tech industry. What do you think about Big Data?

Irving Wladawsky-Berger: Big Data is very interesting. What it means is that we now have access to huge amounts of real-time data that can be analyzed and interpreted to give deep insight. I am now involved with a new initiative of New York University called the Center for Urban Science and Progress. A lot of the promise is to gather lots of information about transportation, energy use, health, and other real-time information in the city, and to be able to use it effectively to better manage the city and make it more efficient. So now we have access to big amounts of data. But to be able to manage that data, to run experiments, and to make sense of it, you need a model. You need a hypothesis that you embed in a model. Then you test the model against your data to see whether it holds. If the model is true, the predictions you are making are correct; if the model is not true, the predictions are incorrect. For example, you can get lots of health care data, but to find the meaning and use that data efficiently, you have to have a good model. So in my mind big data is very important, but even more important is what I call data science. Data science is the ability to build models that use the data, get insight from what the data is telling you, and then put that into practice. And data science is very new; even big data itself is very new. I think it shows tremendous promise, but we now have to build the next layers of data science as a discipline, and that will be done discipline by discipline.
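The workflow Irving describes (embed a hypothesis in a model, then test the model's predictions against data it has never seen) can be sketched in a few lines of Python. The numbers and the linear hypothesis below are illustrative assumptions, not anything from the interview:

```python
# A minimal sketch of the model-testing loop: the "hypothesis" here is
# that y depends linearly on x. We fit the model on one slice of data
# and judge it by its predictions on a held-out slice.

def fit_linear(points):
    """Least-squares fit of y = a*x + b over (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def mean_abs_error(model, points):
    """Compare the model's predictions against data it has not seen."""
    a, b = model
    return sum(abs((a * x + b) - y) for x, y in points) / len(points)

# Illustrative data: fit on one slice, validate on another.
train = [(0, 1.0), (1, 3.1), (2, 4.9), (3, 7.2)]
held_out = [(4, 9.0), (5, 11.1)]

model = fit_linear(train)
error = mean_abs_error(model, held_out)
```

A small held-out error supports the linear hypothesis; a large one refutes it and sends you back to revise the model, which is exactly the test-against-data step Irving is pointing at.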

Niaz: Over the past twenty years you have been involved in a number of initiatives dealing with disruptive innovations. What do you think about disruptive innovation?

Irving Wladawsky-Berger: I think that the work of Clayton Christensen has been really excellent. People knew that there were disruptive technologies that could change things, but they didn’t really understand the phenomenon until Clay wrote his book ‘The Innovator’s Dilemma’, and I think his next book, ‘The Innovator’s Solution’, is even better. I use these books in the graduate course at MIT; they are two excellent books on innovation. People didn’t understand, for example, why it is so tough to manage disruptive innovation. How is it different from regular sustaining or incremental innovation? What should companies do with sustaining or incremental innovation versus disruptive innovation? He framed it in an excellent way, showing the differences and providing guidelines for what companies should do and what they should watch out for. I think he wrote ‘The Innovator’s Dilemma’ in the 1990s. Yet even today, the reality is that many companies don’t appreciate how difficult it is to truly embrace disruptive innovation. If you go and ask companies about disruptive innovation, they will say they are doing it, but in reality they are just working on incremental innovation. To really embrace disruption is still culturally very difficult for many companies.

Niaz: What is cloud computing? What are the ideas behind cloud computing?

Irving Wladawsky-Berger: There are many definitions of cloud computing; there is no one definition. I think the reason is that cloud computing is not any one thing. It’s really a new model of computing where the internet is the platform for that computing model. If you look at the history of computing, in the first phase we had the central computing model, and the mainframes in the data center were the main platform of that model. That model lasted from the beginning of the computing industry until, let’s say, the mid-80s. Then the client-server model came, and in the client-server model the PCs were the central platform. Now cloud computing is a model that is totally organized around the internet, built to make it possible to access hardware resources, storage resources, middleware resources, application resources, and services over the internet. With cloud computing, when you think about it, the actual computer is totally distributed over the internet, in the cloud. In the end, cloud computing is the most interesting model of computing built entirely around the internet.

Niaz: How much disruption does cloud computing represent when compared with the Internet?

Irving Wladawsky-Berger: I think cloud is the evolution of the internet, and cloud computing is a massive disruption. It is a very big disruptive part of the internet, because it’s totally changing the way people can get access to applications and information. Instead of having them on your PC or on the computers in your firm, you can now easily get whatever you want from the cloud, and you can get it in much more standardized ways. So the cloud makes it much easier and much less expensive for everybody, whether you are a big company, a small or medium-sized company, or an individual, to get access to very sophisticated applications. And you don’t have to know everything. Remember, in the PC days, if you bought an application, you got a disk, you had to load it, then there were new versions and you had to manage those versions by yourself. It was such an advance over the previous world, and everybody was happy, but it was very difficult to use. With the cloud, as you know, there is the whole world of apps. If you need an app, you can go to an app store, and an app store is basically a cloud store. So you can easily get whatever you need from the app store, and when an app has a new release it will tell you. You don’t have to know or do anything; it is all engineered for you. That is making IT capabilities available to many more companies and people. So it’s very disruptive.

Niaz: What do you think about the future of startups which are competing with giants like IBM, Google, Amazon, Facebook?

Irving Wladawsky-Berger: That’s the history of the industry. You know, in the 80s, people asked how anybody could compete with IBM, since IBM was such a big and powerful company. And a few years later, IBM almost died, because client-server computing came in, and all these companies like Sun Microsystems, Microsoft, and Compaq almost killed IBM. Luckily for me, who was there, it didn’t die. Then in the 90s, you could ask how anybody could compete with Microsoft; after Windows came out, it was so powerful, it was everything. Google was nothing at the beginning, and here we are now. Every few years we ask this question: here is the most powerful company in the world, and what can possibly happen to them? Sometimes nothing happens to them, and they continue getting more powerful. Sometimes, as in the case of IBM, they reinvent themselves and stay very relevant. IBM is no longer the most dominant company in the world, though it is an important company; in the 70s and 80s it was the leader in the computing industry, and I think many people wouldn’t say that about IBM now. To compete and survive in any industry you have to have a very good business model, and for entrepreneurial innovation, coming up with a great business model is the hardest and core challenge.

Niaz: Can you please tell us something about the ways of asking BIG questions to challenge the tradition and come up with disruptive innovation?

Irving Wladawsky-Berger: Niaz, you are asking a very good question, because asking big questions and coming up with a new business idea or business model is very difficult. In the old days, a lot of the ideas came from the laboratory, if we are talking about the IT industry. Today, the core of innovation is in the marketplace. How can you come up with a great new application or a great new solution that will find a market, that will find customers who want it? You have to be very focused. You have to have some good ideas. You have to study the market. You have to understand who your customers are likely to be. You have to know who your competitors are going to be. If those competitors are going to be big, like Google, Microsoft, or Facebook, you have to know, if you are starting a new company, what you have that is unique compared to those companies. But I think that in general the inspiration for new ideas is a combination of creativity and the marketplace; you have to look at the marketplace and be inspired by it, and that is where great ideas come to light. I don’t think I can give you a good answer, though. You are essentially asking where great business ideas come from. It’s like asking movie directors or composers where they get their creativity; it’s a similar question, and there is no good answer to that.

Niaz: Thank you, Irving. I wish you good health and good luck with all your future projects.

Irving Wladawsky-Berger: You are welcome. It was very nice talking to you. And good luck to you Niaz.

_  _  _  _  ___  _  _  _  _

Further Reading:

1. Viktor Mayer-Schönberger on Big Data Revolution

2. Gerd Leonhard on Big Data and the Future of Media, Marketing and Technology

3. Ely Kahn on Big Data, Startup and Entrepreneurship

4. Brian Keegan on Big Data

5. danah boyd on Future of Technology and Social Media

6. James Allworth on Disruptive Innovation

7. Horace Dediu on Asymco, Apple and Future of Computing

James Allworth: Disruptive Innovation

Editor’s Note: James Allworth is the director of strategy at Medallia and a Harvard Business School graduate. Previously he worked for Apple and Booz & Company, and co-authored the New York Times best seller ‘How Will You Measure Your Life?’. He is a writer at Harvard Business Review and a fellow of Professor Clay Christensen’s think tank on innovation. His work has been featured on Bloomberg, Business Insider, and Reuters.

eTalk’s Niaz Uddin has interviewed James Allworth recently to gain his ideas and insights about disruptive innovation, which are given below.

Niaz: James, you have been working with the father of disruptive innovation, Clay Christensen, for a long time, and you have done a great deal of work in the field yourself. Can you please give us a brief description of disruptive innovation?

James: Disruptive innovation is the process by which novel technologies or business models — often times, vastly inferior to the existing solution — start at the bottom of the market, and by gradually getting better, move to replace the existing solution. Professor Christensen first identified the phenomenon when studying the disk drive industry; but it applies widely. Generally, the competitors start off being considered little more than “toys”, but by being vastly more accessible (both in terms of price and in terms of necessary expertise) they slowly move upmarket and take over the market. Once you understand how it happens, you’ll see it all over the place.

Niaz: What are the major examples of Disruptive Innovation to you?

James: There are examples everywhere. One of my favorite industries to look at is the computing industry. We started with mainframes; they were displaced by minicomputers, which in turn were displaced by the personal computer, then the laptop, and now the PC and laptop are being threatened by tablets and smartphones. In each case, the disruptive entrant had lower performance than the previous solution; often they were cheaper, too.

What’s really fascinating is that industries that have previously been immune to disruption are staring down the barrel of it right now. The internet is enabling all this to happen — whether it be Netflix threatening cable; or Uber threatening entrenched taxi monopolies; or Airbnb going after the hotels.

Niaz: So now, if we would like to differentiate innovation and disruptive innovation, what will be the core basis?

James: The performance of the solution is generally inferior to what was available previously, but it’s cheaper and more accessible. The array of programming options on cable, for instance, is vastly greater than on Netflix. But Netflix is much cheaper. Hotels compete on the quality of the appointments and amenities; Airbnb is unlikely to be able to beat that head on, but by leveraging the internet and utilizing what would otherwise go to waste (people’s rooms), they’re able to compete on a different axis of performance.

What’s also noticeable about disruptive innovation is that it’s rarely just technical innovation that drives it, but also business model innovation. Professor Christensen and Max Wessel touched on this in their recent HBR article on surviving disruption. There’s an “extendable core” in disruptors that enables them to topple the incumbents.

Niaz: Is it possible to disrupt Google? How?

James: Well, those are very big questions.

Google is interesting because it’s made its fortune disrupting others. But in becoming a big organization, it has created an Achilles heel just like any other big organization has: in its case, a very big addiction to advertising revenue. A French ISP just built ad blocking into its service by default. Now, it looks like they have subsequently backed down (example here), but something like that becoming commonplace would make life very difficult for Google.

Niaz: As you know, YouTube has been a great revolution; it has been changing the way we create and share art. Do you see any disruption in the way we create art? Will there be a concept like ‘Disruptive Art’?

James: The wonderful thing about YouTube is that it’s created a publishing platform that anyone can gain access to, and you don’t need a lot of resources to do so. It’s enabled people to reach an audience they otherwise could not. You don’t need to have a deal with a big media company to create a movie or even a TV series now and get it published; artists and regular folks are now able to create a relationship directly with their fans. It’s this ability for the creator to get directly in touch with the fan/consumer that is what is so cool about YouTube and its ilk.

You’re already starting to see artists experiment with new business models that leverage this.

Niaz: To date, technology and innovation have mostly belonged to Silicon Valley. What do you think are the core challenges for developing countries and their organizations in becoming innovative? How can they come up with disruptive innovative ideas, make things happen, and sustain them in the long run?

James: Disruption often starts out where there is non-consumption, where people can’t afford the existing solution. That means that emerging markets are going to be hotbeds of activity for disruptive innovation. You’re already starting to see this happen, with the $20 tablet from India for example: (click)

Niaz: Finally, our readers would love to know about your amazing book ‘How Will You Measure Your Life?’. Can you please give us a brief overview of this life-changing book?

James: The book is based on Professor Christensen’s class at Harvard Business School, using the theory to answer the big questions you really need to be asking about your life and your career. At no point do we claim to have the answers; it’s going to be different for everyone, so instead, we use the business theory to help equip readers with the tools required to find the answers for themselves.

We managed to make the New York Times best seller list, which we’ve just been humbled by. If your readers are interested in finding out more, details are up on the website (here), including a free excerpt.

Niaz: James, thank you so much for your time. I am wishing you very good luck for everything you do.

James: Thanks Niaz, and all the best!

_  _  _  _  ___  _  _  _  _

Further Reading:

1. Horace Dediu on Asymco, Apple and Future of Computing

2. Viktor Mayer-Schönberger on Big Data Revolution

3. Gerd Leonhard on Big Data and the Future of Media, Marketing and Technology

4. Brian Keegan on Big Data

5. Irving Wladawsky-Berger on Evolution of Technology and Innovation

6. Ely Kahn on Big Data, Startup and Entrepreneurship

7. danah boyd on Future of Technology and Social Media