
Letter from a Luddite

I started this semester writing about my fear of killing penguins with too much printer paper. I have since had to halt that practice; printing everything in college is simply impractical, penguins or not. So I have learned to be content highlighting in PDFs and taking margin notes in notepad.

It’s not that I never liked computers. As a kid I was fascinated by the clunky desktop that sat in my father’s office, spent hours contentedly playing pinball, and considered Clippy a particularly helpful close personal friend.

My days of uninhibited computer exploring peaked at a time when desktop backgrounds were still blue skies and rolling green hills.

When I imagined a “coder” I thought of Wade from Kim Possible—some geeky, antisocial guy (though I’d always feel rather pleased, albeit surprised, whenever it was a woman) whose eyes never left the screen and whose body never left the comfort and safety of their bedroom. Nonetheless, it seemed to me that these hackers were capable of doing virtually anything (pun intended) with a few rapid keystrokes.

Any sufficiently advanced technology is indistinguishable from magic. (Clarke’s Third Law)

Come middle school, I was compelled to prioritize other important and equally fascinating magic: managing crushes, learning how to do makeup, and buying and wearing the right clothes. By the time I came back to my past interest in computers, coding suddenly seemed impossibly complicated, uncomfortably alien, and ultimately inaccessible.

But I found my way back to computers, not through the internet but through a newer love—books. I have mentioned this book countless times already, but through Vikram Chandra’s Geek Sublime, with its discussion of gender in early computing, its personal and historical connections to the development of the tech industry in India, and its treatment of the beauty of language, of code as language, and of aesthetic perfection as akin to divinity, I realized that I can still enjoy spending my time thinking about computers and technology, even without the help and companionship of Clippy.

I choose to maintain an interest in computers not necessarily because it’s useful—in fact, using computers often provokes anxiety and frustration for me on a daily basis—but because I’ve come to understand computer technology as an extension of a long tradition of written and oral communication, a tradition that I feel far more comfortable accessing. The internet can be imagined as a high-speed printing press, blockchain as merely a hyper-secure ledger, Snapchat stories as the successor to the town crier, and AI as computer programmers playing at the philosopher’s question of God and Man.
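(For the technically curious: here is a minimal sketch, my own toy illustration rather than anything a real cryptocurrency actually runs, of why “hyper-secure ledger” is a fair gloss on blockchain. Each entry records a hash of the entry before it, so quietly rewriting history breaks the chain and is immediately detectable. Real blockchains add consensus, signatures, and much more.)

```python
import hashlib
import json

def block_hash(block):
    """Deterministically hash a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_entry(ledger, entry):
    """Append an entry that records the hash of the block before it."""
    prev = block_hash(ledger[-1]) if ledger else "0" * 64
    ledger.append({"entry": entry, "prev_hash": prev})

def is_intact(ledger):
    """Check that no earlier block has been quietly rewritten."""
    return all(
        ledger[i]["prev_hash"] == block_hash(ledger[i - 1])
        for i in range(1, len(ledger))
    )

ledger = []
append_entry(ledger, "Alice pays Bob 5")
append_entry(ledger, "Bob pays Carol 2")
print(is_intact(ledger))                   # True
ledger[0]["entry"] = "Alice pays Bob 500"  # tamper with history
print(is_intact(ledger))                   # False: the chain of hashes no longer matches
```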

Each generation, buffered by its own hubris, believes that it is distinct from the generation before, and technology, with its exponential trajectory, does much to reinforce that paradigm. Yet even without romanticization, there is value in recognizing and learning from the technologies of the past, even if we no longer consider them technologically relevant in this modern age. The printing press that created the book was just as revolutionary in its time as the explosion of computer technology and internet relevance that allows me to write this blog post.

While I can appreciate the significance of this tradition culminating in the digital age within my lifetime, this is all a very long-winded way for me to justify my past decision to refuse to learn more about how my iPhone works, or how to download Word, or why I should upgrade my iOS.

My friends roll their eyes whenever I tell them I don’t like to download excessive applications and laugh whenever I curse the computer because “the internet’s broken.”

The great challenge of the modern coder involves making the technical discussion of programming accessible to people who hold interests in other areas. The culture of computer programmers must refuse myopic and exclusionary practice. I can’t offer enough appreciation for my computer science friends who put up with my endless barrage of often stupid but challenging questions and answer with patience, rewording technical answers into comprehensible concepts and making their knowledge accessible so we can exchange conceptual ideas freely. Before this class, I never thought I would know about, much less have interest in, technology like blockchain, and while I doubt I will ever feel compelled to get excited about software upgrades or Alexa or Bitcoin the way some of my friends do, I am glad I have at least returned to the conversation.

On the Internet, everybody knows you’re a DOGE

Every once in a while I do an exercise. I draw up on a piece of paper a sketch of my phone. For each of the apps I use, I write the purpose of the app and try to come up with an alternative way of achieving the same ends. Since I was young, I have always been fascinated by compartmentalization. What I find most interesting about modern smartphones is the way they have created second-tier levels of compartmentalization that consolidate a multifaceted array of functions into a single device. One used to carry a camera, music player, video recorder, phone, calculator, etc.: all bulky, antiquated uni-taskers by contemporary standards. That all of these tasks can be accomplished by a single device is mind-boggling, but the emergence of applications has done far more for expanding the reach of a handheld cellular device than simplifying the number of gadgets we carry around with us.

When it is written out, I realize I am far more dependent on my iPhone for its applications than I was just a few years ago, before I got a smartphone. Basic features like a calculator, a top-rate camera, even internet access are now expected in a cellphone, though I can recall a time in middle school when teachers would tell us that skills like mental math and the memorization of state capitals were important pedagogical exercises because calculators and encyclopedias would not always be available to us at a moment’s notice—and that was only a few years ago.

There is the recurring argument that the more reliant we become on technology, the stupider we become in real life. To the extent that I can outsource my mental faculties and memorization capacity to Google, I have less need to store such banalities as state capitals in my mind, freeing up headspace for different subject matter.

https://www.theatlantic.com/education/archive/2014/09/is-google-making-students-stupid/380944/

The next level of app involves capitalizing on innate human insecurity through the technological plane. As a luddite, I long for the day that civilization returns to the pen, the paper, and the printing press, and I am an aggressive advocate for face-to-face interaction and frequent conversation; however, I do not agree with the idea that technology is unique in making us feel more alone. Rather, technology, because of how closely it has integrated with our everyday existence, is simply magnifying and advantageously marketing what capitalist markets have always targeted and what moody philosophers have iterated over and over: we all have insecurity, and we all have desire. Insofar as desire can be continuously generated and insecurity is encouraged and allowed to fester, products can be sold. This has been the logic of advertising for decades, but the ubiquity of social media has created new extremes emphasizing Lacanian jouissance—we participate in the creation of our online lives as commercials for ourselves.

Every time I do this exercise I decide to delete all my “toxic” social media for between one and three days, after which I usually re-download the applications over the course of a few days.

Because between the absolutely rudimentary and the absurdly superfluous lie harmless applications like Lyft, Google Maps, and WeatherPup, without which I would be functionally incapable of getting around cities, incapable of getting around campus, and unable to dress properly for the weather. At that point I decide, well, I have this magical device that I can’t get rid of, and so I sign back into Facebook and scroll through my feed once again. Though this is obviously far from a perfect system for maintaining technological sanity, I find that each time I go back to social media, I am able to refine my taste so that I only follow and invest my time in what is actually important to me on social media: tagging my friends in dog memes.

Relevant sources:

https://www.facebook.com/groups/dogspotting/

 

95

On October 31st, exactly five hundred years ago, Martin Luther nailed the 95 Theses, or Disputation on the Power of Indulgences, to the door of the Castle Church in Wittenberg. It was an iconic moment that challenged ecclesiastical authority and a big deal for the course of European history.

Fortuitous and concurrent events that aided Luther: the invention of the printing press

I’ve heard the internet compared to the printing press specifically in this Martin Luther 95 Theses context a number of times now. The comparison isn’t wholly inaccurate or woefully presumptuous either, though I personally find it to be a bit over the top when those writing their theses and manifestos seem to insinuate a shared romanticism and identity with Martin Luther.

I think there are a couple of fundamental differences that really preclude the emergence of a modern Martin Luther. First, Martin Luther challenged papal authority initially from within the system of the Catholic Church; claiming that complete online separation from the US government is the solution isn’t really a Luther move. Second, the modern Martin Luther will likely be a collective body of shared opinion and thought that manifests itself as a governing body; Great Man history is a relic of the past.

To address the primary assumption, that the internet is entirely comparable to the printing press:

It’s interesting and useful to see the printing press and the internet as analogous developments. Both allow for the relatively rapid dissemination of information, making it available to a large public audience.

Both, as information-spreading mediums, have in turn posed questions and challenges to incumbent institutions.

Where I think many of these “new Martin Luther” type manifestos fall short is in acknowledging that print itself has gone through a legislative and social transformation that is directly observable and comparable to the internet’s.

To declare to the cybervoid that the government has no right to regulate it seems absurd to me when historical precedent boasts of intervention. The modern printed word has evolved to be subjected to punitive scrutiny. From banning erotica to birth control to evolution in print, the government has had a hand in governing the sphere of the printed word, whether fairly or not—there is no reason the government will not be involved in the online sphere of content as well.

The question, then, is to what extent intervention should be tolerated. What’s interesting about the modern “printing press,” in my opinion, is its co-existence with a viable predecessor. That the existence of both has created a two-tiered system of information credibility is fascinating to me. What I see as the primary difference today between the printed and the typed word is simply the barrier to entry.

For the most part, authors still have to go through publishing houses and numerous rounds of editing before thought is transubstantiated onto the page. While this is not to say that all the books in print are perfect, they are subject to two limiting factors that the average internet fiend is not: 1. Editing and 2. Time.

The “real-time” pace of the internet and the absence of the expectation of curated thought create a lot of nonsense and bullshit, and that is arguably fine in a two-tiered system.

In fact, I think it’s exciting and vital that there be a free and open collective conscious space where ideas can be shared. All of human development has stemmed from the collaborative efforts of interactive knowledge sharing.

That being said, there are certain expectations for certain parts of the internet that the government and private companies can and should be responsible for regulating.

Thanks to net neutrality, the internet is, and will by the grace of God remain, an essentially public good. However, as with any “public space,” there are nested “public/private” distinctions as well: so-called “fractal recursions” that pervade our linguistic personas online and shape the way we think of our public and private selves.

Essentially, a public bathroom might be a public bathroom, but the stall in which you do your business you would not hesitate to call private. Likewise, the internet may be a public platform, but we feel innately that there are limits where the public/private distinction must be upheld. The role of governance is to sort through those boundaries and legislate accordingly.

Obviously, that isn’t nearly as simple as it sounds. Because the internet is a new medium, many of us haven’t been socialized into drawing distinct, well-informed lines between our public and private selves on the internet. Just think of the difference between the Facebook profile of, say, an older professor and the Snapchat story of, say, any middle schooler who enjoys using a googly-eye filter.

The point of this is to say, we have better-established norms regarding public/private distinctions in more established realms of public life; we have yet to figure them out for ourselves online.

Apparently, it’s the role of social anthropologists to think about this kind of stuff. Drawing from a paper on the public/private distinction by Susan Gal that I read for another class, I started thinking about how her analysis might apply to the internet.

The United States, as a capitalist country, is more concerned with this public/private distinction in terms of spatiality than the non-capitalist countries of Eastern Europe. Because capitalism is intently concerned with protecting private property, the question of intellectual digital property poses a paramount issue for governance. This becomes morally and ethically pressing when it comes to issues like revenge porn, which the government has, for whatever reason, refused to act on with more alacrity and seriousness.

And it also raises questions for democracy and governance insofar as voting is a matter of the private self participating in an act of civic engagement with public consequence.

There is actually so much to talk about regarding this that in the spirit of the modern Martin Luther I’m just going to list some of my concerns in 95 words:

Revenge porn should be illegal.

Facebook and other public platforms should close accounts that create an atmosphere of fear or intolerance.

Facebook should be held accountable for deceitful advertising.

*Companies should not censor art.

Sponsored content should be labeled as such.

Private companies need to collaborate with governments in monitoring content.

Facebook and other private companies should prevent the spread of false news that would lead to the targeting/harming of an individual.

Private companies should be held accountable for the loss of customer data.

I have five words left.

Revenge porn should be illegal.

Data, Democracy, and PUPPIES

When I was in high school, I was on the debate team for three years. As a relatively unknown and definitively underbudgeted team, we often took advantage of the open evidence files available to us. While this proved somewhat useful, we quickly realized that the bulk of emergent scholarship was still largely inaccessible or only selectively available behind paywalls, and at the end of the day, most of the interpretation, done in rounds by overconfident high school students, intentionally misconstrued or misrepresented scholarship to support their own arguments anyway. Analogously, while open data has the potential to increase transparency, it can just as easily be abused. In theory, open data initiatives democratize the balance of power in society, and many have rhetorically phrased it as such; however, at the point where initiatives may either maintain the status quo or, worse, work to empower the privileged while failing to reach the most disenfranchised, the impacts must be carefully considered before we willfully embrace the “free exchange solves all” mentality.

Neoliberal thought regarding free exchange emphatically believes in the infallible nature of open markets. The “marketplace” of data is the marketplace of ideas. And contrary to the collectively internalized belief that data is not biased, the human beings that create, evaluate, and use the data certainly can be.

For me, here are two issues that must be addressed in the subsequent conversation on open data sourcing, especially as it pertains to governance:

First, who will use open data?

And second, what will they use it for?

To answer the first question, I would offer two examples. Court proceedings are regularly documented and have been made open and available to the public for a long time. Yet, somewhat unsurprisingly, people are generally uninterested in the vast majority of cases; so uninterested, in fact, that some have taken it upon themselves to spice up the information.

Court proceedings like Rick Allen v. Georgia are so obscure and bizarre that we naturally gravitate towards their absurdist but also hilarious ethos. The anomaly cases that usually have little political salience get picked up as entertainment. Had it not been for the animators of Rick and Morty, I would never have, of my own volition, looked into the spectacular case, which would have been a great personal loss. (As a warning, the case is incredibly crass and not at all appropriate for school.) This case being the exception, the vast majority of court transcripts are remarkably boring.

Even Supreme Court cases often lack the dramatic salience necessary for user interest. Especially given the restrictions regarding camera use, the audio alone is seldom enough to attract attention.

In an episode of Last Week Tonight, John Oliver candidly acknowledges this. Then, in the best way possible, he makes Supreme Court proceedings far more enjoyable (by using puppies!!).

https://www.youtube.com/watch?v=fJ9prhPV2PI

https://www.youtube.com/watch?v=tug71xZL7yc&t=31s

The point being: if the present public does not feel the inclination to look up open data sources now without dramatic intervention, it seems overly optimistic to believe that merely expanding the body of sources will radically change the public’s behavior and make it more proactive about seeking out data.

Furthermore, to the extent that data collection sources are still created by humans and model human phenomena, even if supposedly objective computer programs are tasked with interpreting a set of inputs, without contextualization the output is not neutral.

Rob Kitchin of Maynooth University elucidates many of these same concerns, explaining that businesses interested in open data have “the real agenda…to get access to expensively produced data for no cost, and thus a heavily subsidised infrastructural support from which they can leverage profit, whilst at the same time removing the public sector from the marketplace and weakening its position as the producer of such data.”

He further argues that, “because open data often concerns a body’s own activities, especially when supplemented by key performance indicators, they facilitate public sector reform and reorganisation that promotes a neoliberal, New Public Management ethos and private sector interests (McClean 2011; Longo 2011).” Essentially, the rate at which human jobs will be lost to computer automation will be radically accelerated by open data. Perhaps from a business and economics perspective this is a great way to cut costs; however, from a humanist standpoint, it is rhetorically deceitful to market open data as a way to empower the people if the way in which this technology is introduced is intended to undercut jobs and push out workers.

Additionally, Kitchin offers a critique of so-called “hackathons,” events that use the collective power of a roomful of computer programmers to go deep-diving for the data that source these initiatives. The critique is helpful in considering the ethos of programmer culture as relatively separate from and disinterested in civic impact.

http://progcity.maynoothuniversity.ie/2013/11/four-critiques-of-open-data-initiatives/

Furthermore, as per our discussion with David Eaves, the future of open data seems headed towards the private sector in spectacularly American open-market fashion. With the latest Equifax breach, however, there is reasonable ground to consider whether the structure of the private sector will be incentivized to present and protect data objectively and intentionally. Elizabeth Warren brings this up in a recent Senate hearing on the Equifax breach, which I mention primarily because the way she handles questioning is incredible and very amusing.

https://www.youtube.com/watch?v=vudP3ROnFYI

So, given the concerns, I think the role of independent journalism in creating these types of open data sources is immensely important.

Because journalism, unlike government or the private sector, is in theory intentionally framed as a contextualized argument devoid of any major interest outside of informing the public, it is uniquely positioned to collect and present data on civic society and general governance. For example, The Guardian launched a data initiative called The Counted that has been keeping statistics on police shootings in the US since 2015. James Comey, the chief of the FBI, has admitted it is better than the data the FBI itself collects.

https://www.theguardian.com/us-news/2015/oct/08/fbi-chief-says-ridiculous-guardian-washington-post-better-information-police-shootings

While I have my concerns, I nonetheless still believe that open data initiatives are generally a good thing. Inasmuch as public data can be a tool of the people to check the power of the government, careful consideration must be given to the potential exploitation and misuse of open data to validate prejudice rather than inform change, and we must acknowledge that, for the future of open data and governance, the technical cannot be separated from the ethical.

(AI)n’t She Sweet

Since mankind could dream of AI, he has thought of her as a woman. The trope is so familiar in the shared imagination that even Hollywood picked up on its salience. One of the earliest portrayals of the girl bot was in Metropolis. Released in 1927, the “expressionist sci-fi epic has influenced everything from Superman to Blade Runner.” The movie features the feminized robot “False Maria,” described as “the robot double of the peasant girl prophet in Berlin 2026, which unleashes chaos among the city’s workers and is ultimately burnt at the stake as a witch.” (https://www.theguardian.com/culture/gallery/2015/jan/08/the-top-20-artificial-intelligence-films-in-pictures)

Though the metallic art-deco aesthetic of False Maria may seem more C-3PO than Siri in terms of artificial intelligence, she is nonetheless emblematic of the obsession with feminized AI. Even Siri gets sexualized in Her, a movie wherein Joaquin Phoenix develops feelings for “Samantha,” a Siri-type personal assistant personified through Scarlett Johansson’s voice.

Insofar as we are interested in AI, we are interested in whether or not AI is capable of love (specifically with us). The metric for determining intelligence is variable and often extremely specific. In terms of computational power, we are already well aware of our innate inferiority to the ever-expanding capacity of computing. However, when it comes to artificial intelligence, the Turing test demands a higher, more human standard. The subjective judgement itself has remained remarkably consistent over time: simply, is AI capable of love? More recently, Ex Machina came out as a movie obsessed with the power of the feminized AI to convince male subjects that she was indeed real, indeed sentient, and in addition incredibly good-looking.

The conclusion of the movie, no spoilers, raises questions about the objective moral judgements human intuition attempts to impose on “artificially” conscious beings. The feminization of AI is especially interesting in the context of the progression towards the singularity, given that Western culture has a history of debasing women’s intellectual capacity. Women are pejoratively described as less rational than men, yet personified AIs, essentially just immense amalgamations of billions of logic sequences, are consistently anthropomorphized and imagined as women. (The ultimate question being: are humans just amalgamations of billions of neurochemical impulses driving electrochemical action? But that’s a free-will discussion for another post.)

From a philosophical perspective, it seems unreasonable to assume that a higher-level intelligence, whether artificially created or not, so long as it is unrestrained in its adaptive learning powers, would have any reason to trifle with the whims of human emotions—especially love. The reason, I think, that so much of our creative imagination has been occupied with the obsession with sexualized AI ladybots is the historically masculine labor force responsible for influencing both the technological projections and the cultural reception of artificial intelligence. However, as we move towards a supremely intelligent AI, I do have to wonder: will the singularity have a gender?

Relevant Media

Futurama Season 6 Episode 9 “A Clockwork Origin”

Ex Machina

Metropolis

iLost Generation

Senior year of high school, I made the radical decision to delete Facebook off my phone. I unfollowed people on Instagram and started exclusively following dogs. I never figured out how Twitter worked, so that was fine. The impetus for all this was college decisions. Ironic given where I am now, I suppose, but I remember feeling like each Facebook post declaring “______ class of 2020!!! #soblessed” was wrecking my sanity.

Since then, with sanity readjusted, I have re-downloaded Facebook. I can justify it by saying that I use it for interacting and connecting with people I meet, but in reality, I spend more of my time looking at Harvard’s Meme and Tourist Haiku pages.

I worry, however, that the next generation of kiddos will not have the same kind of self-awareness when it comes to their technology. I have seen groups of fifth graders walk through the children’s museum where I worked completely glued to their phones, experiencing the world through Snapchat filters and taking social inventory through likes and heart reacts.

Yet far from the usual condemnation, my experience inclines me towards existential concern. Because Facebook didn’t become the ubiquitous monolith it is at present until I was in middle school, I can ground some of my notions of normalcy outside the realm of social media. I fear that the next generation won’t have that advantage.

As I have gotten older, I’ve started to think more about how technological development shapes the way I perceive the world. Before getting a cellphone, and in particular a smartphone, I recall being far less concerned with immediacy of response. Now, getting left on “read” is the equivalent of getting stood up on a date.

The boundaries between technology and reality are rapidly dissolving, and the Internet of Things is only speeding up this process.

The ubiquity of cellphone usage moves us even further towards having “the world at our fingertips.” Given that the cellphone has become the universal remote of the future in addition to (arguably) our primary mode of social interaction, there is reason to think about the potential ramifications on the collective social psyche.

The last great existential crisis happened after the world wars. From that we got the postwar literature that would define the century—Kurt Vonnegut, Albert Camus, The Razor’s Edge—writers and works redefining themselves in an attempt to find meaning and identity.

It’s probably a romanticized notion of the past, but I am compelled to believe that what so profoundly informed the world views of these great figures beyond whatever innate talent they possessed was the cultural norm of reading books.

While books are obviously not the end-all for existential solvency, there was far more effort and time spent enjoying literature…time I now believe we spend mindlessly scrolling through Facebook. I’m not merely being facetious, either. According to Pew Research, 27 percent of US adults did not read a single book within the last year. (https://www.smithsonianmag.com/smart-news/27-percent-american-adults-didnt-read-single-book-last-year-180957029/)

https://www.loc.gov/loc/lcib/0806/reading.html

Books have been replaced by cellphones, and by extension social media, in their occupation of our time, thoughts, and ideas.

We have become desensitized to tragedy because of a vicious media cycle and information proliferation. Young people will grow up in a world of near-constant comparison insofar as social media presence is coming close to transcending the technological-ontological plane. And what’s more, the incoming generation will have even less self-awareness of technology’s impact once the Internet of Things seamlessly integrates into every part of our quotidian existence.

Because it’s convenient, we are apt to disregard security and conventional morality in favor of the next big thing in tech.

Technological advancement and integration are not inherently bad; they can be incredibly useful and enormously advantageous for our development as a species. But once the lower tiers in the hierarchy of needs have been met by these advances, how will we confront the question of self-actualization? Of identity? Of genuine relationships?

All of this is to say, the timescale of human brain evolution is incomparable to that of Moore’s Law. Unless we have some sort of philosophical paradigm shift that encompasses the material and ontological questions brought on by technological integration, we are set to be the first lost generation that can’t transcend existential despair. Until then, humanities concentrators will still have a job.

Additionally…

Two Black Mirror episodes that are super related to what I’ve talked about:

Season 3 Episode 1

Season 1 Episode 2

Manus x Machina

Since I was little, I have always been fascinated by clothes. I liked the way that what I wore could speak and question and have a conversation, when appropriate, even without words. I liked matching and clashing colors and textures, and I liked how I moved differently when I wore dresses. When I grew up I was told that my fixation was frivolous, that there were better, more useful ways to spend my time. Still, I am fascinated by how the ways in which we dress convey ideas about ourselves and our society.

Recently, I was wasting time lurking on the Met’s YouTube channel and came across a video called “Manus x Machina (Hand and Machine).”

For context, Andrew Bolton, Head Curator of the Costume Institute at the Metropolitan Museum of Art and one of the most fascinating fashion experts in curation, envisioned the exhibition as a conversation between fashion and technology.

The layout follows Diderot’s classifications of dressmaking tools and techniques as they were outlined in his 1751 Encyclopédie: embroidery, featherwork, artificial flowers, lacework, leatherwork, pleating, tailoring, and dressmaking.

 

At the heart of the exhibit is a gown titled “Wedding Ensemble” by Karl Lagerfeld for the House of Chanel. Made of a creamy white scuba knit fabric, the bodice is left relatively plain to highlight the twenty-foot-long hand-painted gold foliage train. In accordance with the theme of the exhibition, Karl Lagerfeld used a pixelated version of his original sketch as the pattern for the train. Using computer technology, the design was printed onto the train of the dress and then painted in by hand. As the most visually and spatially dominant display in the exhibition, the Wedding Ensemble captivates and captures the attention of its viewers, inviting them to reconsider the ever-evolving relationship between human craftsmanship and technological innovation in the context of fashion.

 

Andrew Bolton’s curation is brilliant for the way in which it addresses modern anxieties around technology and innovation. Evidently, couture has endured the cycles of technological growth and development throughout the last century. Despite the advent of the sewing machine (used to make a Paul Poiret coat in 1919) and machine-made lace (Coco Chanel, in the late ’30s), the symbiosis of hand and machine prevailed; modern technological collaboration continues to visibly revolutionize the fashion industry.

 

The relationship between fashion and technology is fast developing but in many ways inevitable. To the extent that the concepts outlined in the paper on man-computer symbiosis are poised to come to fruition, fashion will be integral in incorporating technology into our everyday lives.

 

Wearable technology is one example of this type of integration. 

Companies like Apple and Nike have already begun to capitalize on this intersection of fashion and tech (Apple was actually one of the sponsors of Manus x Machina). 

The Apple Watch is probably the most recognizable form of wearable technology that we have today, but we already get the sense that the Apple Watch isn’t just a gadget. With different metal finishes and interchangeable wrist straps in an array of colors, users have choices in the aesthetic customization of their tech. It’s more than a little computer on someone’s wrist; it’s a fashion statement.

 

Along the lines of customization, technological innovations, and in particular 3-D printing, may radically change the garment-making industry. Unlike the vast majority of other demand-dependent economic sectors, fashion is uniquely indifferent to consumer taste. In any given cycle (Fall/Winter, Spring/Summer), a handful of top designers and fashion houses create and display their collections on runways, which get bought up by high-end stores before trickling down to the average consumer as ready-to-wear garments. (If you’re curious, Meryl Streep has a great line about this process in The Devil Wears Prada.)

 

As 3-D printing technology becomes more diversified and widely available, there is the possibility that it could revolutionize this cycle. Obviously, high fashion is an incredibly hierarchical, oftentimes problematic industry. “Haute couture” is the most prestigious category of high fashion and is used to describe garments made exclusively by hand from start to finish. Though prohibitively expensive, couture pieces are valued for their unique attention to detail. Unlike ready-to-wear, couture is designed exclusively for the individual. Imagine in the future, though, that 3-D printing can offer the same level of specification and craftsmanship as a couture fashion house. High fashion would be instantly democratized.

I know that fashion has a reputation for being frivolous or superficial, but I fundamentally believe that fashion, as an art and as a mode of expression, is inextricably linked to the cultivation of the soul. In terms of technology and society, fashion is a reflection of our hopes and anxieties as well as a catalyst for transformation and progress. Fashion is dissolving the boundaries between man and computer, deconstructing the distinction between clothing and art, and pushing artistic expression towards democratization. Even if it is frivolous, I still think that’s pretty cool.

Video

https://www.youtube.com/watch?v=yPWp9nkLhmA&t=18s

References (also contains pictures!!)

https://www.nytimes.com/2016/05/06/arts/design/review-at-the-costume-institute-couture-meets-technology.html

 

Wikipedia Kids

As a kid I used to spend hours in front of the laboratory computer, Wikipedia page-hopping, while I waited for my parents to finish working. My favorite page was an entry on mermaids. I was accosted by words like “etymology” and “mythology,” names and references to the esoteric and arcane. Obviously I had no idea what any of these words meant, and so I set out clicking each highlighted blue word I came across that felt unfamiliar. Though seemingly unrelated, the larger conceptual ideas that prefaced an otherwise unitary concept like “mermaids” actually formed a foundation of knowledge that extended across disciplines. This method of self-teaching and self-exploration is the Platonic ideal of pedagogical models. Moreover, the sustainability of the unbroken internal links ensures that a complementary source can always be found: creativity is nurtured, veracity is guaranteed.

The other day I was watching a fascinating documentary on Netflix called “Lo and Behold,” narrated by the hilariously idiosyncratic filmmaker Werner Herzog, when I came across Ted Nelson’s Xanadu project.

In his own words, Nelson describes Xanadu as “an entire form of literature where links do not break as versions change; where documents may be closely compared side by side and closely annotated; where it is possible to see the origins of every quotation; and in which there is a valid copyright system – a literary, legal and business arrangement – for frictionless, non-negotiated quotation at any time and in any amount.”

As with many things, this is best rendered through a visual representation:

http://xanadu.com/XanaduSpace/btf_files/fwDemoOrigins-panorama2.png

And, even more compelling in my opinion, through an interactive sample prototype:

http://xanadu.com/xanademos/MoeJusteOrigins.html
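(For the technically curious: a minimal sketch, my own toy illustration rather than Nelson’s actual design, of the two ideas the quote leans on. Documents are stored as immutable versions, and a quotation points at a specific version and character span, so the link never breaks as the document changes and an excerpt can always be traced back to its origin.)

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Quotation:
    """A link that names a document, a frozen version, and a character span."""
    source_doc: str
    version: int
    span: Tuple[int, int]

@dataclass
class DocumentStore:
    """Every saved version is kept forever, so a (doc, version) link cannot break."""
    versions: Dict[str, List[str]] = field(default_factory=dict)

    def save(self, name: str, text: str) -> int:
        # Append a new immutable snapshot and return its version number.
        self.versions.setdefault(name, []).append(text)
        return len(self.versions[name]) - 1

    def resolve(self, q: Quotation) -> str:
        # Follow a quotation back to its exact origin, regardless of later edits.
        start, end = q.span
        return self.versions[q.source_doc][q.version][start:end]

store = DocumentStore()
v0 = store.save("essay", "Any sufficiently advanced technology is indistinguishable from magic.")
quote = Quotation(source_doc="essay", version=v0, span=(0, 39))
store.save("essay", "A heavily revised essay that no longer contains the original sentence.")
print(store.resolve(quote))  # still prints the quoted words from the frozen first version
```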

For me, there is an exciting and promising field encompassing the intersections of technological innovation and educational development that could take advantage of this particular type of information display.

Unfortunately, much of the learning software I have seen in classrooms is merely a computerized iteration of the traditional classroom. While I believe that technology offers a uniquely personalized learning experience, it has thus far not been adopted as such.

Although the Xanadu project ultimately lost out to the web’s scalable hierarchy of link-breaking domains, I still think it presents a brilliant alternative way of pursuing knowledge.

While my more math- and CS-inclined friends tell me that Xanadu is no more than a humanities major’s pipe dream, I still believe that a visual information-projecting and tracing format similar to Xanadu could revolutionize the way in which we seek and verify information.

Why won’t programmers wear shoes?


I’ve noticed this trend throughout Where Wizards Stay Up Late as well as in my real-life observations of computer science majors. There is a peculiar mythology of the iconoclastic genius and his odd but tolerable behaviors.

Crowther insisted on only wearing sneakers. Einstein purportedly never wore socks.

My father, whom I consider to be a genius, merely tolerates the bare minimum in grooming and will only ever replace tattered wardrobe staples if my mom and I demand it of him.

While I don’t think it is a coincidence that genius and eccentricity often align, I do find it notable that this particular archetype of genius is almost exclusively masculine.

I have been thinking about several reasons why this might be true.

First, women in STEM have historically been barred from collaborative intellectual spaces with men, both institutionally and culturally. As such, there are simply fewer notable female scientists who are known, much less admired, to the extent that their personal quirks are included in the mythology of their academic work.

Furthermore, the association between poor grooming and purported genius is premised on the notion that the pursuit of knowledge is allowed to eclipse the performance of vanity. The belief that “Real Programmers don’t wear neckties” is just one manifestation of this principle.

http://web.mit.edu/humor/Computers/real.programmers

The crux of this notion is societally enforced as inherently masculine. Insofar as women are expected to look a certain way in the workplace to maintain “professionalism,” there is no room for a woman’s bare feet to be admired as an indication of her intellectual vigor. Recent studies have found that women must carefully gauge their physical presentation through makeup simply to convey (not actually demonstrate) competence.

http://www.nytimes.com/2011/10/13/fashion/makeup-makes-women-appear-more-competent-study.html

There appears to be a uniquely pervasive bro-ishness throughout programmer culture that is continually reinforced by the individual egos and attitudes of those who hold such opinions, and it is more widely enforced through a paradigm that both implicitly and explicitly delegitimizes women’s work. However, this was not necessarily true throughout history. Vikram Chandra’s book Geek Sublime explains that the masculinization of the computer industry was a systematized effort that just so happened to end up prioritizing men who preferred not to be bothered with proper grooming (or shoes).

https://www.nytimes.com/2014/08/24/books/review/geek-sublime-by-vikram-chandra.html

While the role of women in early programming is often ignored or overlooked, it was precisely this tidbit of historical precedent that encouraged me to take this class in the first place. In 1967, Cosmopolitan magazine advertised programming jobs to women by comparing writing code to the familiar domestic trope of planning a dinner. In its earliest conception, the majority of coding was done by women. Much like the graduate students who were given the task of writing software, women were tasked with computational coding because their labor was cheaper. However, unlike the grad students, whose work would eventually be acknowledged as groundbreaking by even the uninitiated public, the work of these women has been forgotten.

https://www.washingtonpost.com/opinions/when-computer-programming-was-womens-work/2011/08/24/gIQAdixGgJ_story.html?utm_term=.7ae4409fd000

As programming became more relevant and respected, the process for entry became more elite and masculine. Despite this, women like Jean Sammet, who developed FORMAC and played an influential role in the creation of COBOL, continued to participate and contribute to a field where women were systematically underrepresented.

I often think about how culture functions to limit participation or encourage inclusion. I believe we are at the forefront of creating cultural and paradigmatic shifts that prioritize parity and value an egalitarian meritocracy. I like to think that years from now, women will be given credit for and remembered by their eccentric brilliance and their invaluable contributions, regardless of whether or not they wear makeup or shoes.

Printers and Penguins

I would like to start by talking about printers. The other day I stood in the library printing paper copies of all the readings for each of my four classes, plus one back-up class. As I watched the printer spit paper out at me for a solid two minutes, I was reminded of when, as a child, after learning about the basics of carbon pollution, I earnestly believed that for each minute I held a refrigerator door open, a penguin would die and it would be my fault. Since coming to Harvard, I have learned that the effect is not quite as causative as I used to believe, but I still felt guilty that my academic success would come at the expense of robbing tree homes from woodland critters.

Of course, I could simply forgo the printing altogether. The modern academic institution makes it incredibly easy for students to access all necessary documents online. Market demand and technological innovation have rapidly democratized the availability of personal computers. The prevalence of personal computers has further pushed demand for internet infrastructure throughout the University. That I can use my MacBook whilst sitting in the middle of the Yard is a rather unique feature of the focus and commitment to wireless accessibility.

Despite the fact that personal computers and network access are already changing the physical infrastructure of the university landscape, I believe that the cultural and paradigmatic shift to a technology-centered Academy has been too slow and implicit to allow adequate assessment and evaluation. I find this particularly interesting in the context of information access, namely academic and scholarly articles. Specifically, my position at a university institution gives me access to JSTOR and other pay-walled routes to information that would have been really nice to have prior to coming here. Either way, I think that at some critical point Harvard, and the Academy broadly, will have to consider what information will remain exclusive to the realm of the university-educated. The books behind the velvet rope will always remain as such so long as the upper echelons of academics choose to keep it that way. With internet lectures, online libraries, and the vast global network of interconnected intellectual activity, I am left wondering what effect attending a university in a physical sense will have on my development as a human.

But back to printers. I am afraid of killing penguins, and I wish I weren’t killing so many penguins, but I also hate reading on a computer. Aesthetically, it is impossible for back-lit blue screens to replace the tactile satisfaction of feeling smooth ink glide on soft paper. Libraries are happy, sacred places, and the smell of old books will forever instill a hypnotizing charm in the hearts of bookworms. Harvard has some of the most beautiful, rare, precious books in the world. Maybe my privilege in attending this school is being able to feel the history of great intellectual giants in the weight of books in my hand, and afterwards turning to my MacBook to tell the world what I have learned.