A publisher tries the OpenAI chatbot

I’ve been hearing a lot about advanced AI content creation models, and decided to give one of them – OpenAI’s chatbot tool – a spin. Over the past 30+ years, I have seen advanced digital technologies upend the ways in which creators make and sell media, and audiences consume it. Creators have generally embraced such technologies and associated tools, even as media businesses have struggled.

So can OpenAI (and similar AI-based applications) be viewed as yet another powerful technology in the media creator’s toolbox, much like other transformational technologies have served in the past? Or will it outright replace writers, designers, musicians, and other creative professionals and the publishing businesses that employ them? Some artists are indeed very worried, after seeing AI-created examples that mimic human artists so well that one “painting” even won an award:

There is no doubt that OpenAI is very powerful. Check out what it spat out when I played in the OpenAI API playground, using the prompt “explain the Lean Media framework” and the “text-davinci-003” model:

The Lean Media framework is a framework for making media production more efficient and cost-effective. It is based on the Lean Manufacturing principles developed by Toyota in the late 1980s. The Lean Media framework focuses on a continuous process of eliminating waste and optimizing efficiency in media production. It is designed to help media companies reduce costs, improve quality, and increase productivity. The framework emphasizes the importance of using data and analytics to inform decisions, streamlining processes, and focusing on value creation rather than cost cutting. It also encourages collaboration among teams and stakeholders to ensure that everyone is on the same page with regards to goals and objectives. The Lean Media framework is designed to help media production become faster, more efficient, and more cost-effective.

That’s very good! I should know, because I created the Lean Media framework and wrote a book about it. Readers may even remember that some of the early concepts were fleshed out on this blog 10 years ago. I believe the summary is based on various public sources, including blog posts, articles, press releases, possibly the Google Books excerpt, or samples of the book (legal and otherwise) from other sources.
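For the curious, here is roughly what that playground session looks like as an API call. It’s a minimal sketch using the legacy (pre-1.0) interface of the openai Python package; the prompt and model name are the ones from my experiment, while the API key, max_tokens, and temperature values are placeholders.

```python
# Minimal sketch of the same experiment as an API call, using the legacy
# (pre-1.0) openai Python package. The prompt and model name are the ones
# from my playground session; the API key, max_tokens, and temperature
# values below are placeholders for illustration.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; use your own key

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Explain the Lean Media framework",
    max_tokens=256,
    temperature=0.7,
)

print(response.choices[0].text.strip())
```

Running the same prompt more than once will often return differently worded summaries, which foreshadows the inconsistency described below.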

But then things started to go off the rails. I asked the OpenAI chatbot, “who invented the lean media framework?”

The Lean Media Framework was developed by the Lean Media Lab, a research and design collective based in Los Angeles. The lab was founded by media scholar and professor Dr. Juliette De Maeyer and media makers and technologists Esther Robinson and Sarah Szalavitz.

Hmmm. Is there another Lean Media framework floating around? I couldn’t find it online, or even the “Lean Media Lab.”

So I re-entered the prompt “who invented the lean media framework.” It gave a completely different answer:

OpenAI media creation example

I started re-asking questions about the framework, using slightly different phrasings. More unfamiliar answers came up that completely contradicted earlier answers:

openai history examples

What’s going on here? How can the AI give different answers to the same questions, or even apparently “wrong” answers? It’s a hard question, because the “black box” design of most AI systems means that even their creators are unable to explain how certain answers were obtained:

In machine learning, these black box models are created directly from data by an algorithm, meaning that humans, even those who design them, cannot understand how variables are being combined to make predictions. Even if one has a list of the input variables, black box predictive models can be such complicated functions of the variables that no human can understand how the variables are jointly related to each other to reach a final prediction.

There are exceptions to the black box problem, such as leela.ai. There are also small, amusing examples of AI failures generated by Google’s AI in response to search queries, but the potential for harm is real, as I pointed out when I queried Google last year about “When Neil Armstrong set foot on Mars”:

neil armstrong mars google AI

AI researchers, including OpenAI itself, acknowledge there is a problem:

The OpenAI API is powered by GPT-3 language models which can be coaxed to perform natural language tasks using carefully engineered text prompts. But these models can also generate outputs that are untruthful, toxic, or reflect harmful sentiments. This is in part because GPT-3 is trained to predict the next word on a large dataset of Internet text, rather than to safely perform the language task that the user wants. In other words, these models aren’t aligned with their users.

Clearly, there is still a lot of work to be done. But there are a few important conclusions:

  • AI models will improve.
  • AI tools for media creators will improve.
  • We will see AI-generated content with a higher degree of quality (editorial, visual, and so on).
  • “Accuracy” based upon existing inputs will improve.
  • Humans will attempt to “game” AIs to produce desired communication, business, or creative outcomes. This may be done by training them on unusual data/inputs (including data/inputs at scale) or by tweaking the models.
  • Creators will have to monitor AIs to protect their intellectual property and creative rights. We are already seeing this emerge as an issue with GitHub Copilot, a tool that generates quick, generic code blocks for developers to use but can also reproduce copyrighted code with no attribution.
  • Media creators will learn to harness AI, just as they have done with earlier technologies and tools.

 

Ancestry’s indexing experiment with firms in China

I follow genealogist Michele Lewis on TikTok. She recently found an unusual Ancestry.com transcription from the 1820 Federal Census. Check out the handwritten first name. What does it look like to you?

ancestry index outsource to china

Now, I get it that a 200-year-old handwritten scrawl can be hard to read. But how could a transcriber even consider “Elizabether” in this case?

I think I know the answer. In 2008, I worked for an online technology publication, The Industry Standard (no longer online). I interviewed Tim Sullivan, CEO of The Generations Network, which was Ancestry.com’s official corporate name until 2009. The article was published on October 3, 2008, on the website of The Industry Standard (see image below).

In the interview, Sullivan noted that computers were “not even close” to being able to read handwritten records, especially those from disparate sources such as census records which have many different styles of handwriting.

So Ancestry turned to human transcriptionists. Paid transcriptionists, not volunteers like on FamilySearch. Sullivan told me:

“The vast majority of the investment we’ve made in the last 10 years is not in acquisitions costs or imaging costs, it’s in the indexing costs.”

At the time, Sullivan said Ancestry was paying $10 million per year to transcribe old records. To cut costs, Ancestry hired overseas partners in China, where English was not widely spoken but census records could be transcribed for less money:

So how did The Generations Network import the data from millions of old census forms into its online database? Sullivan says the company spent about $75 million over 10 years to build its “content assets” including the census data, and much of that cost went into partnering with Chinese firms whose employees read the data and entered it into Ancestry.com’s database. The Chinese staff are specially trained to read the cursive and other handwriting styles from digitized paper records and microfilm. The task is ongoing with other handwritten records, at a cost of approximately $10 million per year, he adds.

If you have ever tried to read old handwriting in an unfamiliar language, I am sure you can appreciate how difficult this task would be. But the lack of quality checks and the resulting nonsensical transcriptions are stunning. Keep in mind that Ancestry charges customers lots of money (up to 25% more as of January), but its main focus is generating profit for a string of private equity firms. Its current owner is a Wall Street PE firm, Blackstone Inc. It’s not clear if Ancestry still outsources its transcriptions to overseas firms, or if OCR technology is now good enough to hand off the task to computers.

Regardless, what’s especially frustrating is that Ancestry customers have attempted to correct this particular error. The actual name is “Christopher Orr.” They’ve added the correct annotation multiple times, but Ancestry still shows the name from that 200-year-old census return as “Elizabether Orr.” Lots of people searching for this ancestor will never find him, thanks to Ancestry’s cost-cutting moves 15 years ago and the lack of quality checks to correct such errors.

As Lewis notes at the end of her video, “Maybe you’re going to have to hand-search the indexes one at a time” to determine what the actual name is.

Archive of “Google stays mum on plans for public documents, Ancestry.com points to OCR hurdle.” By Ian Lamont. Published 10/3/2008, The Industry Standard.

ancestry china outsource index transcription 2008

 

Ray Kurzweil on long-term document storage and the genealogy connection

In 2008, I interviewed author, inventor, and futurist Ray Kurzweil for Computerworld. The focus of the published interview (no longer online, sadly) was some of the more startling concepts in his book The Singularity Is Near. But there were a few other bits and pieces that were just as interesting to me, including his thoughts about long-term document storage.

In my mind, I’ve gone back to that interview several times in the past 13 years, mulling over the implications for innovation in my own genealogy business, which sells paper genealogy charts and forms as well as genealogy PDFs. I decided to dig out the transcript and share an excerpt below, along with some follow-up commentary about the implications of long-term document storage for genealogists.

Ian Lamont: In The Singularity Is Near, you also discussed an intriguing invention, which you called the “Document Image and Storage Invention,” or DAISI for short. But you concluded that it really wouldn’t work out. Could you talk a little bit about that?

Ray Kurzweil: That’s interesting. I don’t usually get asked about that, because it doesn’t seem like that interesting an issue.

Ian: It’s interesting to me, because I think I fall into the same category as your father, someone who likes to save all the documents and things related to their lives. I’d buy it!

Ray: Well, we have the same inclination, I inherited that from my father, and I inherited 50 boxes of his documents which was all his letters and so on. And I’ve kept … I have several hundred boxes of documents, and now of course I have a lot more stuff electronically, which is also not very well organized.

The big challenge, which I think is actually an important, almost philosophical challenge — it might sound like a dull issue, like how do you format a database so you can retrieve information, that sounds pretty technical. The real key issue is that software formats are constantly changing.

People say, “Well, gee, if we could back up our brains,” and I talk about how that will be feasible some decades from now. Then the digital version of you could be immortal, but software doesn’t live forever; in fact it doesn’t live very long at all if you don’t care about it, if you don’t continually update it to new formats.

Try going back 20 years to some old formats, some old programming language. Try resuscitating some information on some PDP1 magnetic tapes. I mean even if you could get the hardware to work, the software formats are completely alien and [using] a different operating system and nobody is there to support these formats anymore. And that continues. There is this continual change in how that information is formatted.

I think this is actually fundamentally a philosophical issue. I don’t think there’s any technical solution to it. Information actually will die if you don’t continually update it. Which means, it will die if you don’t care about it.

That’s true of our own lives. People who don’t care about themselves don’t, in fact, survive very long. We have to continually maintain ourselves as biological entities, and when we can make that transition to nonbiological, we’ll still have that same issue.

Ian: You said there’s no technological solution. What about creating standards that would be maintained by the community, or would be widespread enough that future …

Ray: Well, that helps for a while. We do use standard formats, and the standard formats are continually changed, and the formats are not always backwards compatible. It’s a nice goal, but it actually doesn’t work.

I have in fact electronic information that in fact goes back through many different computer systems. Some of it now I cannot access. In theory I could, or with enough effort, find people to decipher it, but it’s not readily accessible. The more backwards you go, the more of a challenge it becomes.

And despite the goal of maintaining standards, or maintaining forward compatibility, or backwards compatibility, it doesn’t really work out that way. Maybe we will improve that. Hard documents are actually the easiest to access. Fairly crude technologies like microfilm or microfiche, which basically hold documents, are very easy to access.

So ironically, the most primitive formats are the ones that are easiest.

So something like Acrobat documents, which are basically trying to preserve a flat document, is actually a pretty good format, and is likely to last a pretty long time. But I am not confident that these standards will remain.

I think the philosophical implication is that we have to really care about knowledge. If we care about knowledge it will be preserved. And this is true knowledge in general, because knowledge is not just information. Because each generation is preserving the knowledge it cares about and of course a lot of that knowledge is preserved from earlier times, but we have to sort of re-synthesize it and re-understand it, and appreciate it anew.

As a genealogist, I have thought a lot about solutions to preserve data for the long term that don’t have the physical limitations of microfiche or paper media, or the problems of computers crashing, subscriptions lapsing, or for-profit online services shutting down (see “Ancestry deleted 10 years of my family’s history“).

Maybe 10-15 years ago, a few people in the Silicon Valley futurist community came up with the idea of a ball or disc etched, in gradually smaller text, with an excerpt from the Old Testament translated into multiple languages. It was actually called the “Rosetta Disc.” The plan was to seed the discs across the world so that even if there were some great calamity or the loss of written languages, future civilizations could resurrect them. Here’s what the disc looked like:

rosetta disc concept photo

Here’s how the concept was described:

The Rosetta Disk is the physical companion of the Rosetta Digital Language Archive, and a prototype of one facet of The Long Now Foundation’s 10,000-Year Library. The Rosetta Disk is intended to be a durable archive of human languages, as well as an aesthetic object that suggests a journey of the imagination across culture and history. We have attempted to create a unique physical artifact which evokes the great diversity of human experience as well as the incredible variety of symbolic systems we have constructed to understand and communicate that experience.

The Disk surface shown here, meant to be a guide to the contents, is etched with a central image of the earth and a message written in eight major world languages: “Languages of the World: This is an archive of over 1,500 human languages assembled in the year 02008 C.E. Magnify 1,000 times to find over 13,000 pages of language documentation.” The text begins at eye-readable scale and spirals down to nano-scale. This tapered ring of languages is intended to maximize the number of people that will be able to read something immediately upon picking up the Disk, as well as implying the directions for using it—‘get a magnifier and there is more.’

On the reverse side of the disk from the globe graphic are over 13,000 microetched pages of language documentation. Since each page is a physical rather than digital image, there is no platform or format dependency. Reading the Disk requires only optical magnification. Each page is .019 inches, or half a millimeter, across. This is about equal in width to 5 human hairs, and can be read with a 650X microscope (individual pages are clearly visible with 100X magnification).

The 13,000 pages in the collection contain documentation on over 1500 languages gathered from archives around the world. For each language we have several categories of data—descriptions of the speech community, maps of their location(s), and information on writing systems and literacy. We also collect grammatical information including descriptions of the sounds of the language, how words and larger linguistic structures like sentences are formed, a basic vocabulary list (known as a “Swadesh List”), and whenever possible, texts. Many of our texts are transcribed oral narratives. Others are translations such as the beginning chapters of the Book of Genesis or the UN Declaration of Human Rights. …

I looked into the details of this project, and wondered if it could be applied to genealogy. I was also thinking about the ancestor tablets found in many home shrines in Taiwan, long-lasting physical manifestations of a person’s lineage which are brought into people’s religious beliefs and ceremonial practices.

However, whether it’s stone, wood, or high-tech micro-etchings, there are practical limitations to applying this idea to genealogy or any written record, including cost and the inability to update the text. For instance, a separate project, NanoRosetta, is a fantastic application of microetching digital images onto nickel to create a permanent archive, but it can’t be updated and requires a fair amount of file preparation (PDF and TIFF) that not everyone is capable of doing.

It made me think that a more realistic solution to the genealogy preservation problem aligns with Kurzweil’s “most primitive” take: Preserve core records on paper, share them widely with relatives and cousins, and use an easy-to-understand versioning system. This could also be applied to other family records, including letters, manuscripts, and more.

We know high quality paper can last hundreds of years. It can be easily copied and spread, potentially allowing the information to last thousands of years, as evidenced by Roman, Greek, and early Chinese dynastic records and literature that can still be read today.

Updated my Excel Basics book for Excel 2019

Earlier this month I released a revised version of Excel Basics In 30 Minutes. This is the third edition of the book. The first, published in 2012 under a slightly different title, showed users how to use Excel 2010 (for PCs) and Excel for Mac (which was then a very different software product). The second edition, published in 2015, covered Excel 2013. A few years later, I updated the second edition for Excel 2016 as well as Excel Online and the mobile apps for iOS and Android.

Now it’s 2020. Besides refreshing the book for the latest desktop version of Excel (Excel 2019, part of the Office 365 suite), I also made a few other changes that were a bit more significant.

Streamlined Excel examples

I built the first edition around the story of three colleagues in a sales department, and how Excel could be used to track their sales and earnings. Through these examples, I introduced basic Excel concepts, including:

  • Functions
  • Formulas
  • Formatting data
  • AutoFill
  • Charts
  • Filtering and sorting data
  • Static cell references

The problem: the chapter on projecting sales was simply too complicated, and describing how to hide and manipulate data for different people was distracting. In the new edition of my Excel book, I only project the earnings of one of the colleagues in that chapter and simplify the step-by-step instructions and screenshots:

Excel Basics in 30 Minutes sorting example

Explaining the Excel mobile apps

Before starting the new edition of Excel Basics In 30 Minutes, I hoped to be able to expand the sections on the Excel mobile apps for iOS and Android.

I quickly discovered that the apps, while very full-featured, are really hard to use with a touch-screen interface. This is particularly true for phones, where a lot of taps are required just to change a single cell.

The reason, of course, is that most Excel spreadsheets are complicated, with lots of cells, columns, buttons, and commands. Excel was designed for the keyboard and mouse, and it often requires accessing files that aren’t stored or exported locally (such as .csv files and charts). Shoehorning the user interface and features into a touch-screen device is really hard to do.

Excel iOS autosum example

So, while the book does explain how to use certain Excel features on small-screen devices (the example above shows Excel AutoSum for iOS), for many of the examples I advise people to use the desktop versions of Excel 2019 for Windows and macOS.

Excel alternatives: Google Sheets and Excel Online

One very popular feature of Excel Basics In 30 Minutes since the first edition is the inclusion of instructions for Google Sheets, a free Excel alternative. Readers and reviewers constantly remark on how helpful it is to be able to use Sheets with the book, either because they can’t afford Excel or because their office or school uses G Suite (the Google equivalent of Microsoft Office). I’ve updated the latest edition of the book for Google Sheets, which hasn’t changed much since the last book update.

Excel Online has changed quite a bit. In 2018, when the last update to the second edition of the book was published, Excel Online was pretty bare-bones — almost a grudging freebie made available to counter the threat of Google Sheets. As of 2020, the Excel Online interface has improved considerably, and there are a bunch of new features that weren’t there before, including filtering.

If you’re interested in learning more about the third edition of the book, check out the official website, excel.in30minutes.com:

Excel Basics book website screenshot 02102020

BIDMC Medical Grand Rounds: Major Advances in Gastroenterology & Hepatology: A Half-Century Retrospective

J. Thomas Lamont MD giving presentation

I recorded audio of the following presentation by my father at Beth Israel Deaconess Medical Center in Boston in January 2020 and later worked with him to incorporate the slides and post it as a video. The video not only gives a history of major developments in gastroenterology from 1965-2020, but also gives insights into how crucial discoveries and paradigm shifts (including those in other fields) can upend the established order. A transcript is included below.

Medical Grand Rounds: Major Advances in GI & Hepatology: A Half-Century Retrospective.

Presented by J. Thomas Lamont, MD (Rabb Professor of Medicine at Harvard Medical School). Sherman Auditorium, Beth Israel Deaconess Medical Center, Boston, January 9, 2020.

Watch the video here

Introduction by Nezam H. Afdhal, MD:

The Zetzel Visiting professor lecturer this year is Professor Thomas Lamont, who is, as you all know, the Emeritus Chief of Gastroenterology at BIDMC. Tom started his career at UCLA, where he did his residency, and then has the dubious distinction of having worked at every teaching hospital in the Boston area, except Tufts.

After his fellowship at MGH, he was on the faculty at Harvard Medical School until 1980, when he became the chief of GI at Boston Medical Center. He then came to BIDMC to be the Chief of Gastroenterology in 1996. His career has spanned major achievements in all areas of gastroenterology. Many of you don’t know this, but he was one of the first people who worked out how gallstones develop in the gallbladder.

He published a paper that was the cover of Nature illustrating why the stomach does not digest itself, due to the interactions of mucins and the effect of acid on the mucin structure within the stomach. At BIDMC he and his research team worked on the mechanism of action of the toxins of C. difficile. These are just some of his scientific advances. He is a clinician who still sees patients today, a sought-after teacher who has educated innumerable fellows and faculty, and a mentor to many. He’s been a great friend to the GI division here at BIDMC. His lecture this morning is going to be a look back at what has happened in the 50 years of Tom’s career in gastroenterology.

Dr J. Thomas Lamont, MD:

This is the UCLA Medical Center where I was an intern in 1965. During my training there I thought that the wisdom and knowledge that I was taught was really top level. Looking back now, I realize that all of it has been wiped away, or superseded by new knowledge. What I was taught then was thought to be state of the art, but eventually the bulk of it was discarded and replaced by more accurate and improved medical science.

I used to think, and a lot of people believe, that discoveries are incremental, that knowledge is added like bricks to a wall which you gradually build. But in fact scientific discoveries are primarily revolutionary, not incremental. There’s a paradigm shift, which is a radical change in the way we do things. It’s often disruptive, a word borrowed from technology, where the new discovery or invention blows up whatever was there before. Later in the talk I’ll show you some examples of disruptive discoveries and inventions in the field of gastroenterology.

A major feature in the field of scientific discovery is resistance from the establishment. I can tell you that Boston has a very powerful medical establishment, and the resistance to some of what I’m going to show you was quite robust. So, if you’re interested in this topic, there’s a very important small book, about 100 pages long, by Thomas Kuhn called “The Structure of Scientific Revolutions”. Kuhn championed the concept of paradigm shift, in which scientists have to move away from something that has been accepted for a long, long time. And the new paradigm replaces the original paradigm, which eventually fades away.

For example, an original paradigm in GI was, “excessive gastric acid secretion causes duodenal ulcers”. This paradigm is what we thought up until about the late ’80s. But then we gradually learned that in fact Helicobacter pylori infection is the cause of the majority of duodenal ulcers.

Here is an overview of some of the major discoveries I’m going to talk about. There are dozens of things that could be discussed at a talk like this, and I’m not even sure I’ll be able to get through the ones I put in the timeline here. I’m going to start with fiberoptic endoscopy, because I think that had the biggest influence on our specialty, and defined what it is today. The discovery of H2 acid blockers was a major advance, followed around the same time by the discovery of the role of Helicobacter, and the invention of proton pump inhibitors. The major discoveries in hepatitis C virus infection have taken us to the point where this disease is now curable.

If you were a patient in 1820 in Italy, your doctor might approach you with one of these instruments, and it would certainly put the fear of God in you. You can imagine that this urethroscope might be quite uncomfortable, especially as the procedure was performed without pre-medication or sterility.

The GI tract is curvy. That’s the problem with these early endoscopes: they’re not flexible but rigid. Illustrated here is the type of gastroscope that was still in use in Los Angeles when I was an intern. You can see the gastroscope there over Dr Schindler’s right arm. On his right is Mrs. Schindler, who attended every endoscopy and who assisted in moving the patient’s head. During the procedure the patient’s neck would be hyper-extended to allow introduction of the metal endoscope. As you know the esophagus is straight; that’s the only reason why this technique worked at all. The scope couldn’t go around any curves and it was really quite a difficult and even brutal test.

These metal sigmoidoscopes were in daily use up until about 1985 or even 1990 at Boston teaching hospitals. We also used plastic scopes that were disposable, and every GI fellow and internal medicine resident learned how to do proctoscopy for colon cancer screening.

In 1842 a critical experiment established that light could be bent. As shown here, a beam of light is transmitted through a tank of water, and you can see that the light follows the curvature of the water as it exits the container. The light actually bounces, or is reflected, off the side of the column of water. The next big discovery was the development by Narinder Singh Kapany of fiber optic cable, which eventually replaced metal cables and had a huge impact on the field of telecommunications.

Kapany and colleagues then had the idea that extremely thin, flexible glass fibers could transmit endoscopic images. This was picked up by Basil Hirschowitz, a native of South Africa, who in 1953 was a GI fellow at the University of Michigan. He had already trained in endoscopy in England before he went to Michigan. Hirschowitz was trained in the Schindler type of endoscopy that I showed you earlier, but he realized that this older technology was difficult and dangerous because of the rigidity of the Schindler scope.

Hopkins and Kapany published a paper in Nature entitled “A Flexible Fiberscope Using Static Scanning.” What they reported was a flexible endoscope that transmitted light through 10,000 fiberglass rods that were slightly bigger than a hair. The exciting new and innovative aspect of this paper was the fact that the endoscope was flexible, and that when the scope was bent the image was not distorted.

Hirschowitz heard about this paper from a cardiology resident who had heard about it at journal club in London. Hirschowitz flew to London, met Kapany and Hopkins in a pub, and discussed their invention. They were very encouraging to Hirschowitz and gave him a few glass fibers to take back to Ann Arbor. Hirschowitz returned to his fellowship at Michigan and built the first fiberoptic gastroscope with help from Larry Curtis and Wilbur Peters, who were physicists. After a few years of trial and error, they produced the first gastroscope.

GI doctors in the audience will notice there are no wheels on this first scope, so it’s not steerable. It’s a side-viewing scope, which makes it really difficult to insert. Compared to what we have today, it is not very practical or useful. But this invention marked the beginning of fiberoptic endoscopy.

Hirschowitz, like a lot of fellows and young researchers in science, first tested the device on himself in February 1957. He managed to control his own gag reflex, passed the scope through his esophagus, and looked around in his own stomach. He then scoped a patient, indicated on the slide as patient #2. A most remarkable aspect of this discovery was that it was not supported by grants and was carried out by a GI fellow and two physicists who worked very hard to create this incredible advance.

Dr Hirschowitz then linked up with American Cystoscope Manufacturing Inc., an American device company that made rigid urologic scopes, which produced this first commercial fiberoptic gastroscope. Again, it had no wheels to allow steering, but it had an air channel and the optics were good enough to allow examination of most of the stomach. Hirschowitz himself published a seminal paper in Lancet in May of ’61 that described his experiences in a series of patients he examined with the new scope. In the next-to-last sentence he challenged the existing paradigm: “The conventional gastroscope (Schindler gastroscope) has become obsolete on all counts.”

The invention of endoscopy was not a hard sell. Some of the other things I’m going to talk about later were met with strong opposition, but fiberoptic flexible endoscopy was widely and quickly accepted. By the time I was a GI fellow in 1971, and into the early 1980s, gastroscopy was being taught at all the Boston teaching hospitals. The opposition was not to the scope itself, but rather to the idea of fellows leaving the laboratory and learning what sounded like a surgical technique.

Certain research-oriented professors at Harvard were somewhat opposed to this. One of my mentors said to me, “Once they taste blood, they’re gone forever.” He meant that once GI fellows had used the scope to diagnose a GI bleeder, they would lose their interest in basic research.

The impact of fiberoptic endoscopy on practice was massive. Currently about 100 million endoscopies are performed a year in the United States, about two thirds of them by GI doctors. Flexible fiberoptic endoscopy has had an important impact worldwide in many medical and surgical fields.

This slide compares how we treated common GI diseases at UCLA in 1965, and how we handle these conditions currently. For example, patients with GI bleeding that did not respond to antacid therapy typically went to surgery. Nowadays we manage this situation with proton pump inhibitors, and if bleeding continues or is torrential then we manage the situation endoscopically. Variceal hemorrhage was a fearsome occurrence in the mid 1960s, and if persistent and life-threatening, the patient was referred for a portocaval shunt. Currently this operation is seldom performed here, and we rely almost entirely on endoscopic control of variceal bleeding and radiologic placement of portocaval shunts, or TIPS.

Management of obstructive jaundice was very difficult because we did not have any imaging studies to examine the bile ducts. Sometimes we resorted to the so-called steroid whitewash, especially if you were afraid of doing a laparotomy. This involved administration of corticosteroids for a week. If the jaundice improved, then it was likely not mechanical obstruction but hepatitis or some other form of cholestatic jaundice for which surgical exploration was not required. Treatment of achalasia typically required either a forceful dilatation of the lower esophagus, which was quite dangerous, or a surgical myotomy. Currently we can perform a myotomy through the endoscope to treat achalasia at our hospital.

Now I’d like to tell you about a few advances in endoscopy that have been made here in our division. A new approach is called molecular endoscopy, where the scope can analyze tissue from tumors using laser light scattering.

This approach was developed here by clinical investigators in our GI unit working with Professor Lev Perelman, a physicist in the GI division who specializes in photonics. Dr Perelman and colleagues hand-built this equipment, which includes a scanner that goes through the biopsy channel of the endoscope. This allows a laser beam to be aimed at the epithelial lining of the esophagus; the instrument then measures reflected light from the wall of the esophagus. As the scope is withdrawn the instrument turns, so it’s like an internal CT scan of the esophagus, except it’s spectroscopy. The reflected light is analyzed to detect dysplasia more accurately than optical techniques and biopsy.

Here is some information on the first prospective randomized clinical study on artificial intelligence to improve colon screening for cancer. This work is being carried out here by Tyler Berzin and Jeremy Glissen-Brown, one of our fellows, and was just published in Gut. They report the ability of artificial intelligence, developed through game technology, to assist a physician doing a screening colonoscopy. The main quality outcome of a screening colonoscopy is the adenoma detection rate, or ADR. The goal is to find all the adenomas and take them out and prevent future development of colon cancer. The arrows indicate the polyp detection rate, or PDR, using routine optical colonoscopy at 29% vs 45% with the assistance of artificial intelligence. This innovative technology will have a huge impact on the ability of endoscopy to find and remove colonic polyps, and will improve our ability to prevent colon cancer.

Mel Wilcox, the division director at the University of Alabama, where Dr. Hirschowitz spent his career, stated that Basil Hirschowitz created the field of modern gastroenterology. When I started my training, GI was a reflective, diagnostic, minimally invasive specialty similar to endocrinology. Now the field is closer to urology or ENT than it is to some medical specialties.

I’d like to turn now to a story of discovery that started in Australia. Pictured here is a medical resident, the tired-looking fellow on the left. He was working in the Royal Perth Hospital and asked his boss, the head of the GI unit, if he could help him find a suitable research project. He was becoming interested in gastroenterology, had a curious mind, and was keen to explore, although up to then he had never done any research. His boss referred him to Robin Warren, shown here on the right, an assistant professor of pathology at the hospital.

Barry went over one afternoon and sat down with Dr Warren to examine pathology slides that Warren had been collecting from GI patients with gastritis and ulcers. Warren was especially interested in this silver stain of a spiral bacterium that was present in the stomach. Warren recognized that this organism had been described for at least 100 years. But so far it had not been identified or named, and its role in diseases like gastritis and ulcers was unknown to medical science.

Barry Marshall realized that some of the patients that Robin Warren was telling him about were his own patients. He had learned fiberoptic endoscopy, and he was performing biopsies on these same patients and knew their medical histories. So this clinical connection lit a spark, and the two of them teamed up and made a remarkable discovery. What they did was to simply correlate this finding with the presence of active chronic gastritis. They published their first paper, a brief letter to the editor in Lancet in 1983, which described a series of their patients with active gastritis and the presence of this curved bacillus in their stomachs.

They went on then to make further correlations between the curved bacilli and peptic ulcers, and eventually gastric cancer. This discovery was innovative and totally new, and was met with huge resistance by the medical establishment. The notion that peptic ulcer was an infectious disease met with near universal rejection. I actually remember the journal club in a Boston teaching hospital near here where this was first presented. The discovery was rejected by most of us in attendance. The problem was that in certain parts of the world, 80% of the population were infected. How could something that common be a cause of a disease like peptic ulcer that only occurs in, say, 1% of the population?

Barry Marshall wrote in his note cards, and in some of his later publications, “Everyone was against me, but I knew I was right.” So who was against him? The acid mafia, a powerful group of senior investigators who championed the idea that hydrochloric acid was the key to formation of stomach ulcers. When we were residents and fellows we had to know a lot about gastric hydrochloric acid secretion. So those who believed in the primacy of stomach acid were definitely strongly opposed to these Australian upstarts, Marshall and Warren.

Listed here are Koch’s postulates; the last two are particularly important in establishing the infectious etiology of a given disease: “The pure culture, when inoculated into the experimental animals, must reproduce the disease.” “Microorganisms must be recovered from the diseased animal.”

In this instance the experimental subject was Barry Marshall himself. He swallowed a pure culture of Helicobacter pylori that he had isolated from one of his patients that he had previously biopsied and cured. So he knew that the strain was treatable and curable. He drank the culture of H. pylori and over the next several weeks developed severe acute Helicobacter infection with nausea, vomiting, and severe dyspepsia. He stated that his halitosis was so bad that his wife told him that he had to sleep on the couch. After the infection was established he treated himself with the anti-Helicobacter therapy and completely recovered. His experiment was soon published in the Medical Journal of Australia in 1985.

Eventually the etiologic role of H. pylori in stomach diseases was established without a shadow of doubt. That doesn’t mean that acid doesn’t play an important role in ulcer formation. An old dictum was, “No acid, no ulcer.” That’s pretty much true.

You can easily cure ulcers with proton pump inhibitors. But if you want to cure an ulcer permanently, then you have to eradicate Helicobacter. In this study in the New England Journal, patients with active duodenal ulcers and Helicobacter infection were first “cured,” in this case with Tagamet for two weeks, and then, after the Tagamet was finished, they were randomized to either two weeks of antibiotics to get rid of the Helicobacter, or placebo.

You can see here a huge separation of these curves. Without eradication of the H. pylori infection by antibiotics, most patients had recurrence of their ulcers by the end of the study. Many other studies like this finally established the important causative role of this pathogen.

Helicobacter pylori is probably one of the most common infections worldwide; in some countries the overall prevalence is about 60-80%. For example, in Bangladesh about 80% of children are infected by age 5, particularly in areas with poor access to clean food and water. The peptic ulcer rate is about 10%, and the organism turns out to be the cause of a number of important diseases of the stomach and duodenum that are shown on this graphic.

About 100% of patients with Helicobacter get some form of gastritis. It’s not the only cause of gastritis but it’s a major one. Eventually, with chronic, lifelong infection, gastric atrophy may occur. This leads to achlorhydria and in some patients to pernicious anemia with vitamin B-12 deficiency. It’s thought that the majority of stomach cancers result from chronic infection with H. pylori. The graphic states that about 1% of infected patients will get gastric cancer. That’s an over-estimate; it’s more like 1 in 1,000, or perhaps even less than that. Infection is also the cause of MALT lymphoma, which can be cured by treatment of Helicobacter without chemotherapy, and without removal of the stomach. As already discussed, H. pylori accounts for the majority of gastric and duodenal ulcers.

Marshall and Warren were finally vindicated in 2005 when they won the Nobel Prize for Medicine. A couple of blokes from Australia who had not done a lot of research at all, with very little support. They used equipment and tools that were right at hand. This seems to be a study that could have been performed by almost anyone. But they were the first, and their persistence in the face of heavy opposition paid off.

The medical treatments we had for peptic ulcer in the mid 1960s and right up to about the mid-’70s were actually very limited and not very effective. The mainstay of medical therapy was antacids. House officers and trainees were expected to know the properties of antacids very well, including doses and their side effects. We sometimes used a special treatment called the “Sippy diet,” which consisted of two ounces of cream every half hour alternating with 30 to 60 ml of an antacid. But a large number of the patients developed severe and semi-acute atherosclerosis from the fat, so it was finally abandoned.

At this hospital William Silen and colleagues developed a unique approach to acute stress ulcers of the stomach with GI hemorrhage in patients in intensive care units. He and his colleagues published a report in a major medical journal that described a technique to reduce the acidity of the stomach. An NG tube was placed in the stomach and the pH was measured. Antacids were instilled through the NG tube to bring the pH above four. This technique was a common treatment for bleeding stress ulcers in that era, just prior to the development of effective drugs to block gastric acid secretion.

When antacid therapy failed to heal ulcers and stop bleeding, we then turned to surgery. The first thing was to cut the vagus nerve, a vagotomy, and then remove the distal half of the stomach to remove the ulcer and reduce some of the acid-producing cells in the antrum, and then hook up the small intestine. As you can see it’s not very physiologic. There were many side effects, and every GI fellow and house officer had to know the side effects that were so frequent after these types of operations.

The introduction of acid-blocking drugs truly changed everything. Again there was some push-back from the establishment. Editorials appeared in the New England Journal from internists and surgeons decrying the overuse of acid blocking drugs across America. A quick survey of the in-patient service at Yale New Haven Hospital revealed that 56% of the patients were on an acid blocker. Some of us commented, “Why so low?” Perhaps these meds were over-used but their effectiveness and clinical impact justified widespread use.

Shown here is a simplified diagram of receptor-mediated control of hydrochloric acid secretion. It was known for many years that histamine could strongly stimulate gastric acid secretion. But it was thought that there must be a special receptor for histamine, which eventually was called the “histamine 2 receptor.” The gastrin receptor was thought to be a major regulator, and then finally the vagus nerve through acetylcholine, all of them having separate receptors with somewhat different transduction mechanisms. The ultimate step in the acid secretion pathway was the proton pump, which secretes a hydrogen ion into the lumen of the gastric gland in exchange for a potassium ion.

Smith, Kline & French started an acid blocker discovery program in 1964 under the leadership of James Black who had already invented and developed propranolol. He was a lead investigator in receptor-mediated physiology and was also an experiences medicinal chemist.

His research team at Smith, Kline & French in the UK finally developed cimetidine, the first billion-dollar drug, which hit the market in ’76. It was a blockbuster drug. I was just starting my career as a faculty member, and I remember the AGA Digestive Disease Week at that time was all about Tagamet. They were everywhere with it. People were just abuzz about this new drug. So it was a pretty easy sell, except for the over-use that I mentioned.

James Black and colleagues at Smith, Kline & French studied the histamine molecule and then made critical modifications. Substitution of the methyl group on the imidazole ring created an agonist. So that was actually the first discovery. Once they had an agonist that did not stimulate H1 receptors, they knew that they were onto something. It took them about six years to develop this molecule, an H2 blocker, which is Tagamet. They had a couple of earlier ones which were very effective, but when they were tested on patients they caused agranulocytosis and pancytopenia. So there were a few missteps, but in general it was a very successful campaign.

James Black was awarded the Nobel Prize in 1988 for his discovery of two major drugs: propranolol and H2 acid blockers. He shared the prize with Gertrude Elion and George Hitchings for their development of purine analog drugs that were used successfully to treat a number of important diseases including autoimmunity, transplant rejection, infections, and gout.

I would like to say a few more words about Dr. Gertrude Elion. She was born in Brooklyn and went to Hunter College, where she started at age 15 and graduated summa cum laude. She then decided to go to graduate school for a PhD, but could not get into a chemistry doctoral program. She was told at one interview by the head of the laboratory, “You might be qualified, but you would be a distraction in a lab full of men.”

She then worked for the A&P supermarket company testing the acidity of pickles. Dr Elion finally joined Burroughs Wellcome in 1944 at age 26 as an assistant to George Hitchings with whom she shared the Nobel Prize. She never received a Ph.D, but was awarded many honorary Ph.D.s including one from Harvard

Elion and Hitchings developed these drugs, two of which, mercaptopurine and azathioprine, were used in gastroenterology. They also developed the antibiotic trimethoprim, acyclovir, and the others listed there. These are all actually still in use in some form or another. All of these novel drugs were based on modifications of the purine molecule.

When I read about Gertrude Elion I became curious about how many women have won the Nobel Prize. Of the 943 Nobel laureates since 1901, only 53 were women. And the graphic shows, by 20-year segments, that the number of women awardees is slowly rising. In the last 20 years, from 2001 to 2018, the number has increased from previous 20-year segments. The trend is in the right direction and I suspect it will improve over the next 50 years.

Hepatology in the mid 1960s was not very well developed. I can basically summarize what we had then as nothing. We could measure AST and ALT, bilirubin, alkaline phosphatase, prothrombin time, and albumin. We had no imaging of any kind. As I mentioned before, we frequently resorted to exploratory laparotomy as a major diagnostic test.

There were few effective treatments for cirrhosis. I remember one of my teachers, I think in residency or perhaps in fellowship, saying that the prognosis of a patient with cirrhosis who had a complication such as bleeding, ascites, or infection was about the same as a person with stage 4 metastatic lung cancer, which at that time was not very treatable. Our knowledge of viral hepatitis was quite limited. We used descriptive terms such as infectious hepatitis, short incubation hepatitis, post-transfusion hepatitis, and several other designations which weren’t very useful.

Shown here is a timeline of post-transfusion hepatitis, a major clinical problem in the post-war era as blood transfusion became widely accepted and well-organized. The first big improvement in the rate of post-transfusion hepatitis was the discovery of the Australian antigen by Baruch Blumberg about 1970, a discovery which allowed screening of donor blood. The rate of post-transfusion hepatitis in the United States in 1965 was about 22%. So the blood supply was not very clean. A big issue then was heavy reliance on professional blood donors, about 40% of whom were infected with hepatitis B or C.

Screening of donated blood for virus infections picked up with anti-HIV testing, because some of those donors were infected with hepatitis viruses as well. The next step was using ALT and AST to screen donated blood. The final step in making the blood supply safe was the introduction of screening for HCV in the 1990s.

The HCV story started with the discovery at the NIH of so-called non-A, non-B hepatitis, which subsequently was renamed hepatitis C. The virus was identified in 1989, and this led to the development of a screening test for blood donors. Interferon therapy started in the early ’90s, and then the big revolution of the last 20 years occurred around 2014 with the approval of direct-acting antiviral drugs. It is predicted that HCV will eventually disappear by about 2030.

These four researchers found a single clone of DNA in a sample of blood from a patient with non-A, non-B post-transfusion hepatitis. This seminal discovery eventually led to isolation of HCV, followed by complete genetic sequencing.

The global burden of HCV disease is huge, with about 2% of the world’s population being infected. This map shows the geographic distribution: the redder the color, the more prevalent the infection.

Initial infection with HCV can produce a number of different outcomes. The unusual thing about hepatitis C is the very high rate of chronic hepatitis. We learned that about 20% of acutely infected patients recovered, but about 80% developed chronic infection with serious complications, including cirrhosis and hepatocellular carcinoma developing decades after infection, usually in individuals who were asymptomatic.

This slide shows a timeline of the treatment, starting on the left in the early ’90s with interferon, first for six months, then for 12 months. And then the addition of ribavirin, which increased clearance of HCV from the blood. And eventually the introduction of pegylated interferon, and finally the current agents of choice, the direct-acting antiviral drugs. Currently about 100% of people with infection can be cured by these powerful medications.

A lot of the clinical research on HCV eradication has taken place here at BIDMC, much of it under the leadership of Nezam Afdhal and his colleagues, who published this paper in the New England Journal in 2014. This landmark study was the first randomized trial demonstrating cure of HCV infection with all-oral direct-acting antiviral therapy with these two DAAs, together known as Harvoni. Currently, we’re at the point where this condition can be cured. These new drugs have revolutionized our approach to this silent killer.

Howard Gruber was a psychologist who studied the processes and backgrounds of discovery and invention. He wrote: “The power and the beauty of science,” I think you can substitute “medicine” there, “do not rest upon infallibility, which it has not, but on corrigibility, without which it is nothing.” Corrigibility, or correctability, is derived from the Latin word corrigere, “to correct.” I have shown examples of how previous medical paradigms have been corrected and replaced. As I mentioned at the beginning of my lecture, most of what I learned about gastroenterology in my early training at UCLA has been corrected. And the same will happen to you. So what you’re learning now is as correct as we can make it, but eventually it will be replaced. So stay tuned.

For more information, see the video of the slides (includes audio). J. Thomas Lamont MD’s bio.

A new Social Security guide gets an unexpected boost from YouTube

Last month, my company i30 Media released a two-volume guide to Social Security retirement and disability benefits: Social Security In 30 Minutes. This was a big project, but I was fortunate to work with a true pro, author Emily Pogue, who worked in human services for years and knew the ins and outs of various Social Security programs, including SSDI, SSI, and the gigantic retirement insurance system used by tens of millions of Americans. It’s especially important now, because of the pandemic’s impact on people’s ability to work.

Early reviews have been great. Here’s what Kirkus had to say about Volume 1 of the guide:

In this debut personal finance book, Pogue covers a wide range of topics, from who’s eligible to collect Social Security benefits to what useful information can be found on the Social Security Administration’s website—all in fewer than 100 pages, including a glossary.

The author walks readers through how Social Security benefits are calculated, the circumstances that can reduce them, and their long-term impact on total income. However, because many of these aspects are influenced by individual earnings and state regulations, the book offers explanations in general terms and encourages readers to consult experts regarding some of the more specific requirements.

Although the book’s primary target audience is readers planning for retirement, Pogue also explains how spouses and dependents may also qualify for benefits. Charts and examples make it relatively easy to understand how, for instance, one’s outside earnings affect benefit levels and tax rates, and readers will be able to easily use the provided calculation formulas.

The book also uses examples to encourage readers to make financially sound decisions, showing, for example, how collecting benefits as soon as one is eligible can substantially reduce one’s overall earnings.

The book is informative and easy to understand, which is no small achievement, given the many variables involved. There are several references to other books in the publisher’s series, such as the companion volume, which covers the disability portion of Social Security; there’s also an excerpt from a book by another author, Personal Finance for Beginners in 30 Minutes, Vol. 2. Despite these advertisements, however, the book is a solid account of how a complicated benefits system works, and it will be useful to readers looking for a concise introduction.

A Social Security explainer that packs a lot of information into a brief text.

NetGalley reviews were also very strong. I was particularly pleased to see this review of Vol. 1, which was also published on Goodreads:

I could not believe how much I learned. I have been reading the Social Security website and searching the web for info for almost a year straight and learned the answers to everything I was looking for and more in this short read. Thank you for making this book.

Another NetGalley review for Vol. 2:

In my work as co-director of an employability program for people with disabilities, one of the biggest concerns of those we support are questions around how working will impact their Social Security benefits. This short guide is informative, well written, and chock full of easy-to-understand details about the labyrinthine benefits world. I’ll be sharing much of this information with the families we support. A must-read for anyone who desires to know more about the process.

But some of the most interesting reactions to the guide have been on YouTube. When the books were launched, I created a few simple screencasts outlining some of the main points and posted them on the IN 30 MINUTES YouTube channel. Compare the number of views for the Social Security videos to the views for videos on other topics (i30 Media also publishes a book about Microsoft Word, Microsoft Excel cheat sheets, a Twitter guide, etc.):

YouTube Social Security videos grid with numbers

More than 5,000 views in six days for “Social Security: SSI and SSDI, side by side”? It’s a seven-minute video outlining some of the points made in one of Emily’s charts in Volume 2. It currently has 14 “likes” and 2 “dislikes.” By comparison, a new video about Twitter animated GIFs received just 12 views in the same time period, and no likes or dislikes.

The activity on the ongoing series of Social Security videos is not just helping to stroke my ego or fulfill my latent dream to become a YouTube influencer (with only 2,760 subscribers, the In 30 Minutes YouTube channel still has a long way to go). It has three direct benefits to my business:

  • Book awareness. About 10,000 people have become aware of the titles and the author via the short introduction at the beginning of each video.
  • Brand awareness. I mention that I am the publisher of the guides, which include more than 20 titles.
  • Sales. I have a very primitive tracking system that shows when visitors from YouTube go to the official book website for Social Security In 30 Minutes, and from there I can follow sales via my own website or Amazon (a rough sketch of how this kind of tracking can work appears after this list).
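
I won’t go into the details of my own setup, but here is a minimal, hypothetical sketch of how this kind of YouTube referral tracking can work. It assumes that the links placed in video descriptions carry a ?ref=youtube query parameter and that the book website loads a small JavaScript snippet on every page; the /track endpoint and the parameter name are placeholders, not parts of my actual system.

    // Hypothetical referral-tracking snippet (a sketch, not my actual system).
    // Runs in the browser on each page of the book website.
    (function () {
      var params = new URLSearchParams(window.location.search);
      var fromYouTube =
        params.get("ref") === "youtube" ||                 // tagged link in a video description
        document.referrer.indexOf("youtube.com") !== -1;   // or a plain referrer check
      if (fromYouTube) {
        // Send a small beacon to a placeholder endpoint for later counting.
        navigator.sendBeacon("/track", JSON.stringify({
          page: window.location.pathname,
          source: "youtube",
          time: new Date().toISOString()
        }));
      }
    })();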

Upon seeing the success of the first two or three videos, I set out to record some more videos on the topic. But I have to be careful that the channel doesn’t become all Social Security all the time. Many subscribers are there for other topics (mostly technology related) so it’s important to serve that audience, too.


All about Amazon Transparency

Amazon has rolled out a new program for brand owners, manufacturers, and Amazon Sellers called Amazon Transparency. I’ve explained what Amazon Transparency is and the requirements for getting started over on the Lean Media blog.

In a nutshell, Amazon Transparency is the answer to a very vexing problem: How to crack down on counterfeiters who are using the Amazon Seller program to flood Amazon with counterfeit goods.

My Amazon Transparency blog post touches upon some of the legal aspects involved (the First-Sale Doctrine and U.S. trademarks) but also gets into some of the practical requirements for Amazon Transparency, including production, packaging, UPC/GTIN registration, and more.

Amazon Transparency

You may also be interested in my Amazon Deep Dive for Publishers.

A brief history of iPhone apps

iPhone apps

In prehistoric times, before Steve Jobs revealed the iPhone, primitive mobile phones and Palm Pilots ruled the earth. These devices came with simple games, utilities, and other small computer programs called applications (or “apps” for short). It was also possible to buy additional apps, which were usually sold by the wireless carrier or offered by the device manufacturer.

But after the iPhone was launched in 2007, followed by the iPod touch later that same year, the iPad in 2010, and the Apple Watch in 2015, Apple took apps to a whole new level. The company made it possible for independent computer programmers to create powerful apps for use with the touch screen interface and sell them for any price (or give them away for free). Consumers could quickly download the apps from Apple’s App Store.

The result was an explosion of apps. Besides the obvious (games, expense trackers, mobile newspapers, Facebook, etc.), there is a torrent of niche apps that anyone can download. They include:

  • Shopping apps for retail stores and e-commerce companies.
  • Social apps such as Facebook, and dating apps such as Tinder.
  • Games, from arcade classics to puzzle apps.
  • News apps that show articles and videos from local and international news organizations.
  • Banking apps that let users scan checks and make deposits, without ever visiting the bank or mailing a check to a processing office.
  • Streaming music and video.
  • Sports apps for professional teams and fantasy leagues.
  • Workout apps for custom routines and tracking.
  • Calculators, scanners, expense trackers, and other utilities.

There are now hundreds of thousands of apps that are actively maintained by the programmers or companies that created them.

Pre-installed Apple apps

A new iPhone comes with more than 20 preinstalled apps that were developed by Apple. They include:

  • App Store. Download paid and free apps.
  • Calculator. In landscape mode, it switches to a scientific calculator.
  • Calendar. A simple calendar app that lets you set appointments and alerts. It can be synced with your Google, Yahoo, and Outlook calendars in the Settings app.
  • Camera. This app takes photos and videos, and allows simple editing of videos.
  • Clock.  This app shows the time zones of your choosing. Alarm and stopwatch functions can be activated in the app or via Siri.
  • Contacts. This app organizes your contacts, including phone numbers and email addresses. It can be synced with Microsoft Exchange/Outlook accounts and Gmail.
  • FaceTime. Live video chat with other iPhone/iPad/iPod touch users.
  • Health. The app gathers health-related data from the iPhone, connected apps, and connected devices including the Apple Watch and various third-party fitness trackers.
  • Mail. This powerful email program can handle personal and corporate email.
  • Maps. Apple’s Maps app looks great, and is integrated with Siri. An alternative is the Google Maps app.
  • Messages. A texting app that is integrated with your phone number and contact list.
  • News. This app lets you select favorite news sources and topics, which are then presented to you in a clean list of headlines and photos.
  • Notes. Take simple text notes with this app, using the virtual keyboard or Dictation.
  • Photos. View photographs, videos, and screen captures taken with your iPhone.
  • Safari, Apple’s mobile Web browser. An alternative is the Chrome app.
  • Settings. Manage hardware and software settings.
  • Apple Wallet. This app works with apps from airlines, hotels, retailers, and other companies to display and process coupons, boarding passes, and vouchers. Wallet is also used to change Apple Pay settings.
  • Weather. A no-frills weather app that automatically shows the local weather if you are connected to a Wi-Fi or carrier network.

Superior alternatives to many of these apps (including Calculator and Weather) can be found in the Apple App Store.

(This post was excerpted from an IN 30 MINUTES guide that I wrote.)

Publishing a jQuery programming guide

Over the past year, I have done several content experiments or expansions in the In 30 Minutes series, ranging from cooking to health and medicine. In this post, I’ll be talking about the jQuery Plugin book that my company released this month. While software has been a focus of the series since the beginning, this is the first title that gets into making software, as opposed to using it.

The story begins last summer. I am a long-time member of the Hacker News community, and on a thread about ebook publishing I left this comment about best practices for experimental publishing. It got 16 upvotes, which was a nice validation — I am not a hacker, but I like to be able to positively contribute to Hacker News when I can. But the thread moved out of sight, and after a few days I forgot about my comment. Six months later, I received an email out of the blue. It started:

I’ve been checking out your “30 Minutes” series and was originally inspired to write my own ebook after reading your post on HN a few months ago. I have since written a small 48-page guide on “jQuery Plugin Development.” I haven’t launched it yet, just waiting for some feedback after sending it to a few friends first.

The author was Robert Duchnik, a Canadian developer who was living in Thailand. We began corresponding, and tossed around the idea of releasing a programming title as an In 30 Minutes guide. This was an interesting area to expand into. Most In 30 Minutes titles are written for mainstream audiences. They range from Melanie Pinola’s book about LinkedIn to the experimental easy Chinese recipes cookbook on the iPad authored by Shiao-jang Kung. The jQuery Plugin guide was focused on a much narrower, highly technical niche audience. Marketing to this group would be a challenge.

Moving Forward With jQuery Plugin Development In 30 Minutes

Rob’s book had some big things going for it:

  1. He’s a jQuery plugin expert, with many years of experience in the field, and he runs Websanova, an online resource devoted to jQuery plugins.
  2. Rob has an existing audience, via Websanova. From previous releases by Melanie and Tim Fisher (author of Windows 8 Basics In 30 Minutes), I have found that authors who already have online audiences have a huge advantage right out of the gate. Not only can they turn to their fans to purchase copies and help spread the word, but because they have interacted with that audience over time, they have an innate knowledge of the problems readers face and what people want to know. This makes for better books and a better author/reader relationship going forward.
  3. There was already a draft manuscript. It needed some light editing and a proofreader, but otherwise it was in pretty good shape.
  4. The manuscript was short. This is an asset, as we want readers to be able to understand the topic at hand in less than 30 minutes.
  5. The market for books about jQuery plugin development had a hole. Through discussions with Rob and a quick analysis of competing titles, I determined that there is a need for this type of resource (high-quality, quick-start programming guide) on this topic, especially if it were priced right.

jQuery plugins
jQuery Plugin Book

This last point is important. I am not talking about low-balling the competition. There are already lots of free online resources about how to write jQuery plugins. There are also a small number of books about jQuery plugins, but most of them are long and somewhat expensive. There was not much in the middle, in terms of length or price. This is where jQuery Plugin Development In 30 Minutes would live.

Rob and I came to an agreement in January, and we moved forward with preparing the manuscript for publication. There were some new writing tools to try out, and some difficulties related to producing code blocks in Scrivener (my primary book production tool), but we established a workflow based on Markdown and GitHub and published the title at the beginning of April. You can read the table of contents for the jQuery plugin book here. The title is available for the Kindle, iPad, Nook, and Google Play, as well as a paperback and a PDF.
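
For readers who don’t write code, here is a bare-bones sketch of the kind of material the book covers: a generic jQuery plugin skeleton with a usage example. This is a hypothetical illustration of the common plugin pattern, not code taken from Rob’s book.

    // A generic jQuery plugin skeleton (hypothetical example, not from the book).
    // It adds a $.fn.highlight() method that changes the background color of the
    // matched elements, using an optional settings object.
    (function ($) {
      $.fn.highlight = function (options) {
        // Merge caller-supplied options with sensible defaults.
        var settings = $.extend({ color: "yellow" }, options);
        // Return `this` so the call can be chained with other jQuery methods.
        return this.css("background-color", settings.color);
      };
    })(jQuery);

    // Usage: highlight every element with the class "note" in light green.
    $(".note").highlight({ color: "#cfc" });

Wrapping the plugin in an immediately invoked function and returning this from the method are the conventions that keep a plugin from clashing with other libraries and allow it to be chained with other jQuery calls.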

In addition, you may be interested in reading some of Rob’s blog posts about jQuery plugin development:


Brazil’s SEED startup accelerator, and an opportunity for foreign startups

If I were 20 years younger, and not already dedicated to my publishing company, I would be all over this. The Minas Gerais State Government in Brazil (home of the local startup hub known as San Pedro Valley) has set up an accelerator program that is open to local or foreign startups. For teams that are accepted to SEED, the program provides living expenses, co-working space, startup capital (equity free!), and support for staying in Brazil (the announcement doesn’t state what the support is, but hopefully it involves helping with visas and other bureaucracy).

There is a deadline coming up October 17 for the next program, but there is also a late December deadline for the following session. Details are located on the official SEED website, but here are the most important elements from the announcement (I’ve also added the official infographic to the bottom of this post):

Seed capital, equity free, for each startup varies approximately between US$ 35,000.00 (for projects with two participants) and US$ 40,000.00 (for projects with three participants). Part will be transferred as monthly scholarships of US$ 1,000.00 for each project participant, to cover living expenses and guarantee exclusive dedication to their business while participating in the program. The remaining US$ 22,000.00 will be transferred to fund a high-potential prototype or a releasable version of a product or service.
SEED also offers an exclusive mentoring program, an inspiring co-working space, and connection to a global community of entrepreneurs.

Who can apply?
Participants must be 18 years or older, and either Brazilian or foreigners able to stay in Brazil for the duration of the program (SEED will assist all foreign participants in meeting this condition). During their participation in the program, entrepreneurs must be willing to live in Belo Horizonte and contribute to developing the local startup ecosystem.

Startups can be from anywhere in the world and must be at an early stage, that is, planning or developing a high-potential prototype or a releasable version of a product or service.
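
The announcement excerpted above doesn’t spell out the program length, but the arithmetic roughly works out if you assume a program of about six months (my assumption, not a figure from SEED):

    Three participants: US$ 22,000 + (3 × US$ 1,000 × 6 months) = US$ 40,000
    Two participants:   US$ 22,000 + (2 × US$ 1,000 × ~6 months) ≈ US$ 34,000, close to the quoted US$ 35,000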

As I stated at the beginning of this post, if I were 20 years younger, I would apply in a heartbeat — and not just because Brazil and Brazilian people are so special. The financial support (no equity!) is important, as is the “cohort” experience of being with other like-minded startups from all over the world. But there’s something else: Living in a foreign country is not only a rewarding personal experience, it allows you to focus on activities and goals without the distractions and expectations of one’s home country. I spent most of my 20s living abroad, and was able to make connections and accomplish projects that never would have been possible if I had remained in Boston.

As it is, I am older now, and have a wife and school-aged kids and other obligations that prevent me from relocating to Brazil. In addition, while my In 30 Minutes venture leverages a number of e-publishing technologies to produce a Dropbox guide, a book that explains what Google Drive is used for, and the most recent release Twitter In 30 Minutes, it is not a pure tech startup, which is what SEED appears to be looking for.

SEED startup accelerator: Basic facts

The official SEED infographic contains more details:

Brazil SEED accelerator