
Death by social media: thoughts on the emerging social challenge.


Rumors have spread since the beginning of time, sometimes with disastrous consequences for society. They are harmful because there is no good way to fight them, yet they can be everywhere. The more vicious they are, the more they are repeated and believed, and their targets can suffer irreparable harm. Modern social media platforms have multiplied the risk: whereas in the past a rumor limped its way around a community, today it spreads far and wide in a matter of seconds. Platforms like Facebook and WhatsApp are not inherently bad, but they have accentuated the risk of toxic rumors. In India, this phenomenon has at times had tragic consequences that warrant government intervention in the interest of public safety.

Recurring tragedies
The following cases provide examples of this challenge:

  • In 2017, a message falsely alleging that child abductors speaking Hindi, Bangla and Malayalam were on the prowl was circulated on WhatsApp, leading to the lynching of innocent people in Jharkhand.
  • In May 2018, a 55-year-old woman was killed by a mob in the southern Indian state of Tamil Nadu, amid false WhatsApp messages and rumors about child kidnappings.
  • In June 2018, two men who had stopped to ask for directions were beaten to death by a large mob in the north-eastern state of Assam. They were mistaken for child abductors following a message spread on WhatsApp that led unsuspecting villagers to believe their children were being targeted by abduction gangs.

A significant adaptive challenge

It may very well be that mischief-makers bent on causing community alarm and despondency spread these messages, with tragic consequences including breaches of the peace. We cannot, however, simply pin the blame on the technology, because that assumes people were good prior to the advent of social platforms.

What is clear is that there is no technical fix to this challenge of vigilante justice. There is no silver bullet, nor a known answer. This is a typical adaptive challenge, with no easy answers. Nonetheless, remedies can be found by analysing the key stakeholders, who can be engaged as follows:

  • Facebook and WhatsApp
    With over half a billion internet users in India, according to the Washington Post, both platforms have a significant customer base. Facebook has over 200 million users while WhatsApp has over 270 million. Since WhatsApp is owned by Facebook, we can argue that the same company has nearly the entire Internet population of India on its platforms. Because India contributes almost a quarter of Facebook's 2.27 billion users, it is important that Facebook understands its responsibilities to public safety in a market it has a significant stake in. To that end, engagement with Facebook and WhatsApp ought to focus on:
    – Engaging both platforms, which derive significant value from India, to send messages to their users warning of the spread of toxic rumours, especially once alerted by the public or the authorities.
    – Requesting users to flag or report, within those platforms, any suspiciously toxic or dangerous message or video being circulated. This crowd-sources risk management, since it is not possible for the platforms to check all the content users circulate. It is especially important for WhatsApp, which does not currently have such a feature.
    – Engaging Facebook to prioritise posts by professional media, which usually debunk such rumors when they start circulating. This matters because in March 2018 Facebook introduced a separate newsfeed that prioritises content from family and friends while hiding posts by professional news organisations. The effect is to promote and amplify rumors among friends and family, making professional debunking less prominent.
    – Nudging Facebook and WhatsApp to engage local-language moderators who understand local nuances to sift through and moderate content. This would be complicated for WhatsApp, which encrypts its messages, but that is precisely where crowd-sourcing the risk via a reporting function becomes essential.
    – Pushing both platforms to set up a local corporate entity and appoint a grievance officer to manage and help curb the spread of rumors that have claimed several lives.
  • Local Community/Village Leaders
    For local communities, the fundamental issue is education that promotes community cohesion and peace. For example, once rumors start spreading, community leaders can flag the issue with the authorities to ensure that law enforcement kicks in. This goes beyond the social media platforms; it is about promoting tranquility among people in local communities. Dealing with the community entails engaging its leaders, and the ministry can support this by arranging training, for example via seminars, for both law enforcement officers and community leaders in order to curb vigilante justice. In addition, there is no point in wasting a crisis: the ministry should coordinate with the justice ministry to ensure that the law is not just enforced, but that examples are made of perpetrators of vigilante justice.
  • Technology Ministry/State Government
    The ministry and state government both play a role in ensuring that the above stakeholders play their part and that education programs are implemented. Evolving adaptive challenges like these require leadership, and it is important for the ministry to lead these efforts, including engaging all stakeholders. In addition, the ministry can also try the following:
    – Holding social media group administrators responsible. This is particularly important for WhatsApp, where toxic messages can be circulated or broadcast rapidly within groups. Holding administrators responsible can push for responsible behaviour within groups and ensure that someone is accountable.
    – In emergency situations where there is a severe breach of the peace or loss of life arising from these rumours, the ministry can, as a last resort, temporarily switch off access to the two social media platforms via local internet service providers. While turning off the Internet can be a plan B in times of crisis, it is not a solution to the spread of hate rumors; it simply slows their pace. What is important is to work towards building social cohesion, for neither Facebook nor WhatsApp created the distrust in these communities.
  • Local Media/Radio Stations
    Professional media can also be mobilised to help correct and counter toxic rumors that have had tragic consequences. Radio would be most effective where it has coverage, as the message can be broadcast to a broad audience.

A double-edged sword.
Technology has brought massive advantages to information dissemination. Platforms like Facebook have democratized communication by giving anyone with a smartphone the ability to broadcast. However, these are just tools, and they can be used to spread constructive messages or hate. Traditional mass media, too, has been used to mobilize mass violence; the genocide in Rwanda in the 90s provides an instructive case study. We cannot wait for tragedies and genocides to occur before we move in to manage these new e-platforms. They must be responsible, and must be held responsible, as they can supercharge content that ratchets up tribal and religious hate and upsets fragile social balances in communities.

Who is using who? Data privacy concerns and the data industrial complex


Giving a keynote at a conference in London this week, Microsoft CEO Satya Nadella added his voice to the discourse about data privacy, advocating for its recognition as a human right. He was speaking in support of Europe's General Data Protection Regulation (GDPR). Nadella is not the only tech titan supporting stringent data privacy; Apple's CEO, Tim Cook, recently pushed for the same. These voices, spurred by various incidents in recent years, have brought data privacy and the protection of personal information to the fore. This triggers thought-provoking questions such as: should data privacy be a human right?

This got me thinking about two services that I use, Google and Facebook, which track me. I kicked into self-reflection mode, asking myself the following questions:

• Why would they need my data?
• What happens when they collect my data?
• How much power do the service providers amass by collecting my data and aggregating it with other data?
• Why have I been alright with this scenario?
• What should I do about it?

What is data privacy?
Data privacy has become topical in recent times. It concerns the ability of individuals or entities to determine what sensitive information or personally identifiable information1 (PII) in a computer system can be shared with third parties. As debate around these issues has intensified, standards and expectations have evolved. Meanwhile, regulations in different jurisdictions have gradually changed, and Europe's GDPR has set the regulatory bar very high, which is why leaders in the tech industry have come out to praise such regulations.

Why is data privacy important?
A number of dynamics have brought this issue to the forefront of technology debates. The recent class action lawsuit against Google and Facebook2 for deceptively and secretly tracking users' locations and collecting their data, even when users were led to believe they had switched such tracking off, as well as the Cambridge Analytica scandal that affected the 2016 US elections, have raised the temperature. Key questions over data privacy include whether personal data may be collected and shared with third parties without consent, and whether third parties can track the websites a user visits.

While services such as Google and Facebook generally encrypt user information, the greatest concern is not whether PII is unencrypted and therefore vulnerable; it is what these platforms do with the unlimited amounts of personal data they collect, with or without the user's knowledge. At a conference in Brussels last month, Cook took an indirect dig at the two companies, referring to them as the data-industrial complex that takes user information and gets it "weaponized against us with military efficiency". He further warned against sugarcoating the consequences, adding that such surveillance "and these stockpiles of personal data serve only to enrich the companies that collect them. This should make us very uncomfortable. It should unsettle us." Cook's words, albeit strong, warn us of an unsettling development: mass surveillance (tracking), stockpiling (storage), and weaponization (abuse and manipulation) of PII, key issues underpinning the debate on data privacy. But the matter goes beyond these three concerns; it speaks to how the companies deemed part of the data-industrial complex can become extremely powerful without being answerable to anyone, giving rise to calls for regulation.

What happens to our data?
As I use Facebook and Google services, the firms collect my data. The question is, what data do they collect and what do they do with it? Among other personal information, Google collects the following data on me:

• It tracks where I have been. It stores details about my location as long as my phone is running, if I have inadvertently turned on location tracking. It is possible to see a whole timeline of where I have been from the point I started using Google on my device. This information can be found here.
• It keeps an advertisement profile on me. This profile is created from information such as my possible income, weight, gender, age, relationship status and other personal details. You can find your information here.
• It keeps a record of everything I have ever searched, including what I deleted. This search history is kept across all my devices, meaning that even if I delete my search history on one device, it remains accessible on the others. You can find your information here.
• It keeps my entire video viewing history on YouTube. From this, it can figure out my political leanings, my religion and my mood, among other things.

On the other hand, Facebook stores among other details, the following information:

• All data on things that I like and find interesting. This is based on my “like” clicks on its website, as well as what my friends like about me.
• All my geographical locations based on my log-in positions, the times I logged in and the device I used.
• All applications that have been connected to my Facebook account. This enables them to estimate and profile my preferences and interests.

Both Google and Facebook store massive amounts of my data, extracted and archived by their algorithms. Facebook keeps enough data on me to fill hundreds of thousands of Word documents, including all of my activity on its website; this information can be found here. Google likewise stores data that, if printed, would fill hundreds of thousands of Word documents; it can be downloaded here.
This means that gaining access to anyone’s account gives access to all information on anything and everything there is to know about the user, including calendar data diarizing every aspect of the user’s life.

Google says it collects all this information to deliver better services, including more tailored advertisements, never mind that I don't really need the advertisements. Yet questions arise over how it uses what it collects. It could well use it for services that border on manipulation: packaged with advanced artificial intelligence and machine learning, the data can deliver highly personalized services in the form of digital agents, such as an advanced Google Assistant. Facebook, on the other hand, doesn't directly sell my data; it sells access to me, as all my data is used to create targeted advertising. The Cambridge Analytica scandal, however, exposed the unapproved mining of customer data for political manipulation. Whichever way these companies use our data, nefarious or otherwise, the greater concern is that we are all vulnerable to exploitation in unlimited ways that can benefit governments, companies and other entities with the money to buy access to us.

Is what they disclose as collecting the same as what they actually collect?

Prior to the Cambridge Analytica scandal, Facebook enunciated a policy which did not bother many users. But when the Cambridge Analytica issues came to light, they raised the specter of far greater use of data than was previously disclosed to users. It is therefore reasonable to ask whether Google also collects far more data than its disclosure policy lets on. Whatever the case, it is important to keep guard over what information we let these companies collect. The best way to do that is to limit the tracking and access requested by the services on the devices we use. In general, it is much easier to let go of Facebook's services than it is to stop using the variety of Google's applications. That said, it is stronger regulation, designed to diminish the significant power the data-industrial complex has acquired, that will protect users and keep that power in check. If necessary, the application of anti-trust laws to break up the tech giants may become an option on the table; Lina Khan's exploration of how to break up Amazon's monopoly provides interesting direction.3

References
1.  Martin, K. E (2015). Ethical Issues in the Big Data Industry. MIS Quarterly Executive, June 2015 (14:2)

2. https://nakedsecurity.sophos.com/2018/10/25/google-and-facebook-accused-of-secretly-tracking-users-locations/ 

3. https://www.theatlantic.com/magazine/archive/2018/07/lina-khan-antitrust/561743/ 

Should HKS mandate stakeholders to use LastPass? A discussion on cyber-security.


In the 1987 sci-fi comedy Spaceballs, the character Dark Helmet, after being told the lock combination to the air shield, remarks, "So the combination is one, two, three, four, five? That's the stupidest combination I've ever heard in my life! That's the kind of thing an idiot would have on his luggage!" Most web applications today would discourage users from choosing such a short password. Nonetheless, according to Security Magazine, the worst password of 2017, for the second year in a row, remained "123456". Another telling example is Kanye West's inadvertent leak of his iCloud passcode when he unlocked his phone live on video. All this goes to show that many system users are sloppy when it comes to protecting data and restricting access to information, and for various reasons they continue to use weak, easy-to-crack passwords online.
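The kind of screening a typical signup form applies to such passwords can be sketched with a few simple checks. The thresholds and the short blacklist below are illustrative, not any particular site's actual policy:

```python
import re

def password_strength_issues(password: str) -> list:
    """Return the reasons a password would be rejected by a
    typical web signup form (thresholds are illustrative)."""
    issues = []
    if len(password) < 8:
        issues.append("shorter than 8 characters")
    if password.isdigit():
        issues.append("digits only")
    if re.fullmatch(r"(.)\1*", password):
        issues.append("single repeated character")
    if password in {"123456", "password", "qwerty", "111111"}:
        issues.append("on common-password blacklist")
    return issues

print(password_strength_issues("123456"))
# → ['shorter than 8 characters', 'digits only', 'on common-password blacklist']
```

Real services typically go further, checking against large breached-password corpora rather than a hand-written blacklist.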

In general, a system is only as strong as its weakest link. This is as true for individual ICT users as it is for organisations, and it means that a lot of online data remains vulnerable to hacking and a lot of online systems remain vulnerable to intrusion. It is therefore entirely possible for an institution such as HKS and its stakeholder users to spend a lot of money on security systems and still not be truly secure, because of vulnerabilities at the human level. More so because, by its very nature, as host to strategic experts, former cabinet officials, top global security and international relations resource persons, and generally as a repository of groundbreaking know-how, some of it proprietary or strategic in nature, the school is a potential target for cyber attacks.

Because most users need passwords for multiple online resources, many end up reusing the same passwords across platforms, in some cases simplifying them so that they are easy to remember. Password managers address this challenge: they store login details for online applications or websites and log into them automatically, so that one does not need to remember every password. This is achieved by encrypting the stored passwords under a master password, so that all you need to remember is that master password. This is very convenient. Given this convenience, the question that arises is: should HKS make a password manager mandatory for its system stakeholders?
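The master-password mechanism can be illustrated in a few lines: a key-derivation function stretches the master password into an encryption key, which then protects every stored credential. This is only a toy sketch of the idea; the XOR "cipher" below is for illustration, whereas real password managers use hardened KDFs and authenticated encryption such as AES-GCM:

```python
import hashlib
import os

def derive_vault_key(master_password: str, salt: bytes) -> bytes:
    # Stretch the master password into a 32-byte key. Real managers use
    # PBKDF2/Argon2 with high work factors to slow down brute force.
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 200_000)

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # Toy XOR keystream -- NOT secure, purely to show that only the
    # key derived from the master password can recover a stored password.
    stream = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

salt = os.urandom(16)
key = derive_vault_key("correct horse battery staple", salt)
vault_entry = toy_encrypt(key, b"my-banking-password")
# XOR with the same keystream decrypts:
assert toy_encrypt(key, vault_entry) == b"my-banking-password"
```

The design point is that the vault holds only ciphertext; forgetting the master password, or an attacker not knowing it, leaves the stored passwords unreadable.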

Password management solutions have their own vulnerabilities, depending on their engineering. As Schneier argues, despite the possible flaws of password managers, they remain a convenient way of managing complex passwords, a worthwhile trade-off against the reality that users otherwise resort to weak, vulnerable passwords.

Back to the question: should LastPass be mandatory or not? The answer depends on the totality of defenses against cyberthreats such as hacking. In the case of HKS, there are multiple layers of security. The first is the Harvard Key, Harvard University's unified user credential, which uniquely identifies users and provides them access to applications and services. The second is the mandatory two-factor authentication solution that requires users to validate their access through verification on a second device. LastPass would therefore be an additional layer, most useful for access to non-HKS online resources. On this basis, I argue that LastPass should be encouraged, but not mandated.
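The second-factor layer can be made concrete with a sketch of a time-based one-time password (TOTP, RFC 6238), the scheme behind most verification codes. This illustrates the general mechanism, not Harvard's actual implementation, and the shared secret is the RFC's published test value:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    counter = int(at // step)                      # 30-second time window
    msg = struct.pack(">Q", counter)               # counter as big-endian uint64
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Device and server share a secret and the clock, so both compute the
# same short-lived code without the secret ever being transmitted:
shared_secret = b"12345678901234567890"  # RFC 6238 test secret
print(totp(shared_secret, 59))  # → "287082" (RFC 6238 test vector, 6 digits)
```

Because the code changes every 30 seconds, a stolen password alone is not enough to log in, which is exactly why this layer reduces the marginal value of mandating a particular password manager.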

Yet there are additional reasons not to make LastPass mandatory.

Mandating LastPass would amount to mandating a specific vendor solution, including the flaws that come with it. LastPass, like other password managers, has its own vulnerabilities, which, even though they get patched from time to time, have been exploited by hackers. In 2016, for example, a hacker blogged about how he harvested LastPass passwords. The fact that these tools save users headaches by auto-logging them into accounts does not make them immune to security breaches.

Hacking of passwords is an adversarial act, which may be motivated by curiosity, obsession, boredom, thrill-seeking, warfare, malice, revenge, the pursuit of money or self-promotion, among other motivations. Making one password management solution mandatory makes all users vulnerable to LastPass's own technological weaknesses once an adversary identifies them.

In addition, besides blurring the boundaries between public and private spaces, since passwords for all sorts of applications are stored in the same solution, LastPass, like other password managers in its category, allows syncing across multiple devices. As highlighted by Silver and others, such synchronization amplifies the risk of attack by opening up password extraction from multiple devices.

Cyber-security threats at HKS are potentially high, and there are multiple security layers protecting and restricting access to Harvard-specific resources. Users at HKS can use LastPass to manage passwords for both personal and HKS-related access. However, the foregoing arguments show that, though desirable, it is not necessary to mandate the use of LastPass as a password management solution at Harvard.

Government as a Platform: Rethinking government in Massachusetts.


Towards a Massachusetts 2.0
Governments everywhere face increasing complexity in delivering a good standard of service amid global challenges such as natural disasters, economic turbulence, climate change, trade wars, energy shortages and demographic change. This bears on federal authorities as much as it affects state governments, and Massachusetts is no exception. In light of these dynamics, harnessing technology to serve increasingly discerning state citizens becomes a necessity for efficient service delivery. This memorandum explores what implementing government as a platform (GaaP) in Massachusetts entails, the criteria for deploying services on the platform, and a governance model for managing the service. I shall call this Massachusetts 2.0.

GaaP: What it entails for the State of Massachusetts & the Criteria for Deployment.
GaaP is one way Massachusetts can gravitate towards open government. Open government enables the state to co-innovate with citizens, enables mass collaboration and networks the government with other system-wide stakeholders, building trust in the process. The underlying principle behind GaaP is that government information is a national asset, and that this information, as well as services, must be delivered to citizens when needed. As Tim O'Reilly suggests (1), GaaP enables the government to be a convener and enabler rather than the initiator of civic action.

To move towards GaaP, the state will have to work on a number of issues. First, it must establish a set of standards: rules that help anyone develop programs and applications that communicate and cooperate with the state's platform, Massachusetts 2.0. Second, everything must be centered on simplicity. Massachusetts 2.0 must be stripped of elaborate features down to a core set of minimal services, so that feature-filled innovations are farmed out to private innovators. Third, the platform must be open by default, designed to enable participation by anyone who can access the public data and use it for the public good in line with the set standards. Fourth, the platform and the state must make provision for mistakes and errors as stakeholders experiment with the platform; citizens must not be unduly punished for mistakes made while experimenting with public data as Massachusetts 2.0 evolves. Fifth, citizens, whether private players or non-profits, should be allowed to mine data, extract insights and innovate around it in more ways than the state government can imagine. Sixth, and last, the state can lead by example, using the same standards to deploy some services itself and show how far other players can go. In other words, we would be creating a public, sophisticated and far more liberal and open version of Apple's App Store, allowing the state government to collaborate with citizens.

Deploying Massachusetts 2.0.
The first step is to develop a comprehensive set of standards. To allow collaboration between the state and citizens, this can be triggered by the Governor issuing an executive directive to that effect. Such a directive provides a framework within which the platform can function and evolve, under the clear direction that it is guided by open government. It is not necessary to create standards from scratch; the state can adopt and adapt existing open standards as well as open-source solutions.

The next step would be to build a simple platform that exposes the underlying data from the state's systems. This entails ensuring that Massachusetts' internal systems are service-oriented; it is necessary to audit and improve them first, prior to exposing the underlying data.
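As a sketch, the exposed layer could be as simple as a read-only JSON API over the state's datasets. The endpoint names and records below are invented placeholders, not real Massachusetts data, and a real deployment would sit on top of audited, service-oriented internal systems:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical open datasets; names and records are illustrative only.
DATASETS = {
    "/v1/licenses": [{"type": "business", "issued": 2018, "county": "Suffolk"}],
    "/v1/health/facilities": [{"name": "Example Clinic", "beds": 40}],
}

class OpenDataHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in DATASETS:
            body, status = json.dumps(DATASETS[self.path]).encode(), 200
        else:
            body, status = b'{"error": "not found"}', 404
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        # "Open by default": let any third-party application call the API.
        self.send_header("Access-Control-Allow-Origin", "*")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the sketch quiet

def serve(port: int = 8080) -> HTTPServer:
    # Returns the server; call .serve_forever() to run it.
    return HTTPServer(("127.0.0.1", port), OpenDataHandler)
```

Running `serve().serve_forever()` would expose the datasets at, say, `/v1/licenses`, so private players and non-profits can build applications against the same standard endpoints.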

Once the platform is set up, the state can start deploying some of its own services on it, to lead by example. Massachusetts was a leader in providing universal healthcare in the United States, and a healthcare service is one of the services the state could build on the platform. This would also allow other players to come in and mine data for various other applications and uses, in line with the set standards. Other state-mandated services, such as licensing and state taxes, can be deployed on the platform as well.

Governing model.
Europe and countries such as China favor extensive regulation by the state to achieve what they call platform fairness (2). The state has a duty to regulate the conduct of citizens, natural or corporate, to ensure that the use of public data and all public assets conforms to the set standards and is not contrary to good public policy. In the same vein, it is important for the state to ensure platform fairness by curbing monopolistic, anti-competitive behavior in a manner that stimulates innovation and competition.

The governance model proposed is a consultative one, in which a mechanism such as a board is established through the Governor's executive directive. The board will have representatives of the private sector. The question of who controls the servers becomes paramount: given the public nature of the data exposed by the platform, it is recommended that the state retain control of the servers, while ensuring that the consultative board collects all feedback from consultations and that this feedback is implemented promptly by the state.

In conclusion, the Internet is an open platform and its evolution has created immense opportunities to deliver more open forms of government that harness the collective wisdom of citizens. For the state of Massachusetts to harness that wisdom, GaaP is the way to go. Such a platform will ensure that the private sector and talented individuals can in multiple ways leverage public data to create service innovations that can change the state in ways unimaginable.

1. O’Reilly, T (2010). Government as a platform. https://www.mitpressjournals.org/doi/pdf/10.1162/INOV_a_00056

2. The Economist (2016). Regulating technology companies, taming the beast.  https://www.economist.com/business/2016/05/28/taming-the-beasts  (retrieved on 2/10/2018)

Lean starting up and how to be agile: A lesson from the past.


Success is great, but failure can be greater, if you take the lessons! This is not to say failure is not annoying; it is. Yet when you fail, you learn. You learn how to fail, and how not to fail. Better still, failure teaches us how things should have been done, had we known better. Sitting in business school doing armchair analysis gives you an air of invincibility, but actually managing projects, running a business or setting one up is a different ball game, and it provides hard lessons. I call it the university of hard knocks. Here is a typical story of failure, from which I learnt what to avoid in my subsequent ventures. This account contrasts two approaches to executing projects: the agile approach and the waterfall method.

My first e-commerce project was in fact motivated by my business school research on retailing business models. At the time, Amazon was rising, as was Rakuten, the largest e-commerce business in Japan. The question on my mind was: why not explore that in Africa? I knew there were known knowns, known unknowns and unknown unknowns in such a venture. For example, I knew the existence of retail businesses in Zimbabwe meant there were retail customers. I also knew there were no online payment options, and it would be necessary to set one up. I knew there would be delivery hassles, among many other hiccups. In short, there was no e-commerce ecosystem. I was clear about the business model to explore, and I knew I had to bootstrap the start-up, given the lack of venture capital. But other things were unknown to me at the time, chief among them the agile approach to executing such a venture. Had I known better, I would have executed it differently.

My friend Mansour Ali and I mapped out the project plan in his apartment in Fukuoka. After years of doing projects in banks, making sure there was a project plan before starting a new project had become second nature to me. Where is the project plan? Who is the project owner? Who is the project manager? What are the steps? Which group of activities goes first? What is the critical path? These were the key questions I sought to address first. It was a siloed, compartmentalized mindset acquired from running banking projects.

I was the project sponsor; Mansour was the project manager working on the platform development. I was to fund everything necessary, implement the project in Zimbabwe and lead the deployment, stakeholder management and business development. With hindsight, our approach was a typical waterfall approach: build the online marketplace, add as many features as we deemed necessary until it looked great, perform an internal test of the platform, sell the platform to existing retailers so that they could sell their products to a broader audience online, find a payments solution, figure out how to execute delivery, then launch, and succeed!

The entire process took several months. It was complicated by the fact that we both had to work full-time at our other jobs: Mansour ran his own business while I worked separately elsewhere. It was worsened by the reality that he sometimes traveled halfway around the world to Oman, where he was developing a car export business. Each time he traveled there, he would stop working on the project on account of internet controls in the host country. Since he was the key developer and project manager, this caused serious delays and compounded the waterfall approach we had taken, as it took very long to launch the end product, a multi-merchant e-commerce marketplace.

Pazimba.com homepage

After over nine months of working on the project, and an investment worth several thousand dollars in time and money, we launched the online marketplace in the last quarter of 2014. We got fairly good media coverage on key technology websites and in daily newspapers. At that point a handful of very good merchants in the country had signed up to run virtual stores on our platform, including a well-established book publisher with a rich catalog going back several decades. Getting merchants to sign up turned out to be a complex task. It was a long pipeline that involved travelling across the country to make presentations to each of them, explaining the features and benefits of running a web store. What became clear was that most of them knew vaguely about e-commerce but had never thought about using the internet to sell their products, and in general they were not convinced they needed to set aside resources to run a web delivery channel. This surfaced several unknown unknowns we had never factored into our waterfall development approach. We did get some merchants to sign up, but the biggest unknown unknown was that we had to do much of the work ourselves, because merchants had neither the time nor the human resources to update their own catalogs or process deliveries. This left us running from pillar to post to cover all the gaps; the work was tedious. Finding a delivery solution was also not easy. Fortunately, unlike some other African countries, such as Nigeria, Zimbabwe has an effective postal address system, so we partnered with Zimpost (the national postal service), which delivered packages reliably at a low price. The price was low at the beginning, but was revised upwards a few weeks later, and we did not have enough critical mass to negotiate it back down. All these dynamics were unknown unknowns.

After a few months of running the venture, it became obvious that we were bleeding money, and Mansour was getting disillusioned. It became clear that we had to scale down the project, and we did. There was no point in bleeding more resources.

Pazimba.com marketplace

The question is: what should we have done differently? It became clear several months later that Zimbabwe was simply not ready for e-commerce. What were the assumptions that made me proceed with the project in spite of all its blind spots? I had proceeded because I believed that Henry Ford’s quip, “If I had asked people what they wanted, they would have said faster horses,” would turn out to be true in our case. So we did not engage the market as much as necessary as we proceeded to implement our solution, at a huge cost.

Yet we could have done things differently. How? Had we been more agile in our deployment, we could have seen sooner, and at far less cost, that there was not much light at the end of the tunnel. Had we followed the lean start-up approach, we could have tested our idea before proceeding with all pistons firing to implement it. We could have tested our assumptions and hypotheses first by engaging merchants and retailers to assess their appetite for online delivery channels. We should also have sampled potential buyers, to validate our assumption that they would actually use the solution. We knew that buying online was not yet a habit among our potential customers, but we assumed our efforts would change their behavior. We should have validated those assumptions through an agile approach. What else could we have done? Had we established some level of appetite from customers and merchants, then instead of spending months building a fully fledged multi-merchant e-commerce platform, we should have built a thin minimum viable product off commercial templates to prototype and validate our assumptions. We could have done this by setting up the prototype platform with a single merchant to gauge potential interest from consumers.

As it turned out, we did not, at great cost in money and time. Lesson: always implement any venture or project in agile fashion. Validate your assumptions, figure out how to prototype and deploy a minimum viable product, and be sure it is worth proceeding before investing huge amounts of money that cannot be recovered later.
