Category Archives: Idea Soup

Unification of Byzantine Fault Tolerance

The title reads rather elegantly. However, I have no idea what it means, and I challenge you to find meaning as well. You see, while your friendly editor typed the title, the words themselves came from a non-human author, one that goes by the name SCIgen.

SCIgen is an automated scientific paper generator. Accessible via the internet, SCIgen produces utterly random nonsense, complete with an abstract, a hypothesis, test results, detailed diagrams and charts, and even academic references. At first glance the output seems highly convincing. In fact, unscrupulous individuals have used it to author fake submissions to scientific conferences and to generate bogus research papers for publication in academic journals.
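
For the curious, the trick behind SCIgen is simple: it recursively expands a large, hand-written context-free grammar until only words remain, so the output is always grammatical even though it is meaningless. Below is a minimal Python sketch of that technique; the tiny grammar is invented for illustration and is not SCIgen's actual code or rule set.

```python
import random

# Toy illustration of the technique behind SCIgen: recursive expansion of a
# context-free grammar. This grammar is invented for the sketch; SCIgen's
# real grammar contains thousands of hand-written rules.
GRAMMAR = {
    "TITLE": [["The Effect of", "ADJ", "NOUNS", "on", "FIELD"],
              ["Towards the Understanding of", "ADJ", "NOUNS"]],
    "ADJ": [["Perfect"], ["Pseudorandom"], ["Game-Theoretic"], ["Encrypted"]],
    "NOUNS": [["Modalities"], ["Archetypes"], ["Epistemologies"], ["Symmetries"]],
    "FIELD": [["Hardware and Architecture"], ["E-Commerce"], ["Machine Learning"]],
}

def expand(symbol):
    """Recursively expand a symbol; anything not in GRAMMAR is a terminal."""
    if symbol not in GRAMMAR:
        return symbol
    return " ".join(expand(s) for s in random.choice(GRAMMAR[symbol]))

print(expand("TITLE"))
# e.g. "The Effect of Encrypted Archetypes on Machine Learning"
```

Because every expansion is grammatical by construction, the result parses as English while meaning nothing, which is precisely what makes it superficially convincing.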

This says a great deal about the quality of some academic conferences and their peer review processes (or the lack thereof).

Access the SCIgen generator here.

Read more about the Unification of Byzantine Fault Tolerance — our very own scientific paper — below.

The Effect of Perfect Modalities on Hardware and Architecture

Bob Widgleton, Jordan LeBouth and Apropos Smythe

Abstract

The implications of pseudorandom archetypes have been far-reaching and pervasive. After years of confusing research into e-commerce, we demonstrate the refinement of rasterization, which embodies the confusing principles of cryptography [21]. We propose new modular communication, which we call Tither.

Table of Contents

1) Introduction
2) Principles
3) Implementation
4) Evaluation
5) Related Work
6) Conclusion

1  Introduction

The transistor must work. Our mission here is to set the record straight. On the other hand, a typical challenge in machine learning is the exploration of simulated annealing. Furthermore, an intuitive quandary in robotics is the confirmed unification of Byzantine fault tolerance and thin clients. Clearly, XML and Moore’s Law [22] interact in order to achieve the visualization of the location-identity split. This at first glance seems unexpected but has ample historical precedence.
We confirm not only that IPv4 can be made game-theoretic, homogeneous, and signed, but that the same is true for write-back caches. In addition, we view operating systems as following a cycle of four phases: location, location, construction, and evaluation. It should be noted that our methodology turns the stable communication sledgehammer into a scalpel. Despite the fact that it might seem unexpected, it always conflicts with the need to provide active networks to experts. This combination of properties has not yet been harnessed in previous work.
Nevertheless, this solution is fraught with difficulty, largely due to perfect information. In the opinions of many, the usual methods for the development of multi-processors do not apply in this area. By comparison, it should be noted that Tither studies event-driven epistemologies. By comparison, the flaw of this type of solution, however, is that red-black trees can be made efficient, linear-time, and replicated. This combination of properties has not yet been harnessed in existing work.
Here we construct the following contributions in detail. We disprove that although the well-known unstable algorithm for the compelling unification of I/O automata and interrupts by Ito et al. is recursively enumerable, the acclaimed collaborative algorithm for the investigation of 802.11b by Davis et al. [4] runs in Ω(n) time. We prove not only that neural networks and kernels are generally incompatible, but that the same is true for DHCP. We verify that while the foremost encrypted algorithm for the exploration of the transistor by D. Nehru [23] runs in Ω(n) time, the location-identity split and the producer-consumer problem are always incompatible.
The rest of this paper is organized as follows. We motivate the need for the partition table. Similarly, to fulfill this intent, we describe a novel approach for the synthesis of context-free grammar (Tither), arguing that IPv6 and write-back caches are continuously incompatible. We argue the construction of multi-processors. This follows from the understanding of the transistor that would allow for further study into robots. Ultimately, we conclude.

2  Principles

In this section, we present a framework for enabling model checking. We show our framework’s authenticated management in Figure 1. We consider a methodology consisting of n spreadsheets. The question is, will Tither satisfy all of these assumptions? Yes, but only in theory.

Figure 1: An application for the visualization of DHTs [24].

Furthermore, we assume that electronic theory can prevent compilers without needing to locate the synthesis of massive multiplayer online role-playing games. This is a compelling property of our framework. We assume that the foremost replicated algorithm for the construction of redundancy by John Kubiatowicz et al. follows a Zipf-like distribution. Along these same lines, we performed a day-long trace confirming that our framework is solidly grounded in reality. We use our previously explored results as a basis for all of these assumptions.

Figure 2: A decision tree showing the relationship between our framework and the simulation of context-free grammar.

Reality aside, we would like to deploy a methodology for how Tither might behave in theory. This seems to hold in most cases. Figure 1 depicts the relationship between Tither and linear-time communication. We postulate that each component of Tither enables active networks, independent of all other components. This is a key property of our heuristic. We use our previously improved results as a basis for all of these assumptions.

3  Implementation

Though many skeptics said it couldn’t be done (most notably Wu et al.), we propose a fully-working version of Tither. It at first glance seems unexpected but is supported by prior work in the field. We have not yet implemented the server daemon, as this is the least private component of Tither. We have not yet implemented the homegrown database, as this is the least appropriate component of Tither. It is entirely a significant aim but is derived from known results.

4  Evaluation

As we will soon see, the goals of this section are manifold. Our overall evaluation method seeks to prove three hypotheses: (1) that the World Wide Web no longer influences performance; (2) that an application’s effective ABI is not as important as median signal-to-noise ratio when minimizing median signal-to-noise ratio; and finally (3) that USB key throughput behaves fundamentally differently on our system. Our logic follows a new model: performance might cause us to lose sleep only as long as usability takes a back seat to simplicity constraints. Furthermore, our logic follows a new model: performance might cause us to lose sleep only as long as scalability constraints take a back seat to performance constraints. Only with the benefit of our system’s legacy code complexity might we optimize for performance at the cost of signal-to-noise ratio. Our evaluation approach will show that increasing the instruction rate of concurrent symmetries is crucial to our results.

4.1  Hardware and Software Configuration

Figure 3: Note that popularity of multi-processors grows as complexity decreases – a phenomenon worth exploring in its own right.

Though many elide important experimental details, we provide them here in gory detail. We instrumented a deployment on our network to prove the work of Italian mad scientist K. Ito. Had we emulated our underwater cluster, as opposed to deploying it in a chaotic spatio-temporal environment, we would have seen weakened results. For starters, we added 3 2GB optical drives to MIT’s decommissioned UNIVACs. This configuration step was time-consuming but worth it in the end. We removed 2MB of RAM from our 10-node testbed [15]. We removed more 2GHz Intel 386s from our underwater cluster. Furthermore, steganographers added 3kB/s of Internet access to MIT’s planetary-scale cluster.

Figure 4: These results were obtained by Noam Chomsky et al. [23]; we reproduce them here for clarity.

Tither runs on autogenerated standard software. We implemented our model checking server in x86 assembly, augmented with collectively wireless, noisy extensions. Our experiments soon proved that automating our Knesis keyboards was more effective than instrumenting them, as previous work suggested. Second, all of these techniques are of interesting historical significance; R. Tarjan and Andrew Yao investigated an orthogonal setup in 1967.

Figure 5: The average distance of our application, compared with the other applications.

4.2  Experiments and Results

Figure 6: The expected instruction rate of our application, as a function of popularity of replication.

Figure 7: Note that hit ratio grows as interrupt rate decreases – a phenomenon worth studying in its own right.

We have taken great pains to describe our evaluation setup; now the payoff is to discuss our results. That being said, we ran four novel experiments: (1) we ran von Neumann machines on 15 nodes spread throughout the underwater network, and compared them against semaphores running locally; (2) we measured database and instant messenger performance on our planetary-scale cluster; (3) we ran 87 trials with a simulated DHCP workload, and compared results to our courseware deployment; and (4) we ran 58 trials with a simulated RAID array workload, and compared results to our bioware simulation. All of these experiments completed without LAN congestion or access-link congestion.
Now for the climactic analysis of the second half of our experiments. Bugs in our system caused the unstable behavior throughout the experiments. Continuing with this rationale, bugs in our system caused the unstable behavior throughout the experiments. These expected time since 1935 observations contrast to those seen in earlier work [29], such as Alan Turing’s seminal treatise on RPCs and observed block size.
We have seen one type of behavior in Figures 6 and 6; our other experiments (shown in Figure 4) paint a different picture. Operator error alone cannot account for these results. Similarly, bugs in our system caused the unstable behavior throughout the experiments. Bugs in our system caused the unstable behavior throughout the experiments.
Lastly, we discuss the first two experiments. The data in Figure 5, in particular, proves that four years of hard work were wasted on this project. Error bars have been elided, since most of our data points fell outside of 35 standard deviations from observed means. The data in Figure 4, in particular, proves that four years of hard work were wasted on this project. Even though it is generally an unproven aim, it is derived from known results.

5  Related Work

Although we are the first to propose the UNIVAC computer in this light, much related work has been devoted to the evaluation of the Turing machine. Our framework is broadly related to work in the field of e-voting technology by Raman and Taylor [27], but we view it from a new perspective: multicast systems. A comprehensive survey [3] is available in this space. Recent work by Edgar Codd [18] suggests a framework for allowing e-commerce, but does not offer an implementation. Moore et al. [40] suggested a scheme for deploying SMPs, but did not fully realize the implications of the memory bus at the time. Anderson and Jones [26,6,17] suggested a scheme for simulating homogeneous communication, but did not fully realize the implications of the analysis of access points at the time [30,17,22]. Thus, the class of heuristics enabled by Tither is fundamentally different from prior approaches [10]. Our design avoids this overhead.

5.1  802.11 Mesh Networks

Several permutable and robust frameworks have been proposed in the literature [9,13,39,21,41]. Unlike many existing methods [32,16,42], we do not attempt to store or locate the study of compilers [31]. Obviously, comparisons to this work are unreasonable. Recent work by Zhou [20] suggests a methodology for exploring replication, but does not offer an implementation. Along these same lines, recent work by Takahashi and Zhao [5] suggests a methodology for controlling large-scale archetypes, but does not offer an implementation [20]. In general, our application outperformed all existing methodologies in this area [12].

5.2  Compilers

The concept of real-time algorithms has been analyzed before in the literature [37]. A method for the investigation of robots [44,41,11] proposed by Robert Tarjan et al. fails to address several key issues that our solution does answer. The only other noteworthy work in this area suffers from ill-conceived assumptions about the deployment of RAID. Unlike many related solutions, we do not attempt to explore or synthesize the understanding of e-commerce. Along these same lines, a recent unpublished undergraduate dissertation motivated a similar idea for operating systems. Unfortunately, without concrete evidence, there is no reason to believe these claims. Ultimately, the application of Watanabe et al. [14,45] is a practical choice for operating systems [25]. This work follows a long line of existing methodologies, all of which have failed.

5.3  Game-Theoretic Symmetries

A major source of our inspiration is early work by H. Suzuki [34] on efficient theory [35,44,28]. It remains to be seen how valuable this research is to the cryptoanalysis community. The foremost system by Martin does not learn architecture as well as our approach. An analysis of the Internet [36] proposed by Ito et al. fails to address several key issues that Tither does answer [19]. On a similar note, Lee and Raman [7,2] and Shastri [43,8,33] introduced the first known instance of simulated annealing [38]. Recent work by Sasaki and Bhabha [1] suggests a methodology for storing replication, but does not offer an implementation.

6  Conclusion

We proved in this position paper that IPv6 and the UNIVAC computer can collaborate to fulfill this purpose, and our solution is no exception to that rule. Such a hypothesis might seem perverse but has ample historical precedence. In fact, the main contribution of our work is that we presented a methodology for Lamport clocks (Tither), which we used to prove that replication can be made read-write, encrypted, and introspective. We used multimodal technology to disconfirm that architecture and Markov models can interfere to fulfill this goal. We showed that scalability in our method is not a challenge. Tither has set a precedent for architecture, and we expect that hackers worldwide will improve our system for years to come.

References

[1]
Anderson, L. Constructing expert systems using symbiotic modalities. In Proceedings of the Symposium on Encrypted Modalities (June 1990).
[2]
Bachman, C. The influence of decentralized algorithms on theory. Journal of Homogeneous, Autonomous Theory 70 (Oct. 1999), 52-65.
[3]
Bachman, C., and Culler, D. Decoupling DHTs from DHCP in Scheme. Journal of Distributed, Distributed Methodologies 97 (Oct. 1999), 1-15.
[4]
Backus, J., and Kaashoek, M. F. The relationship between B-Trees and Smalltalk with Paguma. Journal of Omniscient Technology 6 (June 2003), 70-99.
[5]
Cocke, J. Deconstructing link-level acknowledgements using Samlet. In Proceedings of the Symposium on Wireless, Ubiquitous Algorithms (Mar. 2003).
[6]
Cocke, J., and Williams, J. Constructing IPv7 using random models. In Proceedings of the Workshop on Peer-to-Peer, Stochastic, Wireless Theory (Feb. 1999).
[7]
Dijkstra, E., and Rabin, M. O. Decoupling agents from fiber-optic cables in the transistor. In Proceedings of PODS (June 1993).
[8]
Engelbart, D., Lee, T., and Ullman, J. A case for active networks. In Proceedings of the Workshop on Homogeneous, “Smart” Communication (Oct. 1996).
[9]
Engelbart, D., Shastri, H., Zhao, S., and Floyd, S. Decoupling I/O automata from link-level acknowledgements in interrupts. Journal of Relational Epistemologies 55 (May 2004), 51-64.
[10]
Estrin, D. Compact, extensible archetypes. Tech. Rep. 2937/7774, CMU, Oct. 2001.
[11]
Fredrick P. Brooks, J., and Brooks, R. The relationship between replication and forward-error correction. Tech. Rep. 657/1182, UCSD, Nov. 2004.
[12]
Garey, M. I/O automata considered harmful. In Proceedings of NDSS (July 1999).
[13]
Gupta, P., Newell, A., McCarthy, J., Martinez, N., and Brown, G. On the investigation of fiber-optic cables. In Proceedings of the Symposium on Encrypted Theory (July 2005).
[14]
Hartmanis, J. Constant-time, collaborative algorithms. Journal of Metamorphic Archetypes 34 (Oct. 2003), 71-95.
[15]
Hennessy, J. A methodology for the exploration of forward-error correction. In Proceedings of SIGMETRICS (Mar. 2002).
[16]
Kahan, W., and Ramagopalan, E. Deconstructing 802.11b using FUD. In Proceedings of OOPSLA (Oct. 2005).
[17]
LeBout, J., and Anderson, T. a. The relationship between rasterization and robots using Faro. In Proceedings of the Conference on Lossless, Event-Driven Technology (June 1992).
[18]
LeBout, J., and Jones, V. O. IPv7 considered harmful. Journal of Heterogeneous, Low-Energy Archetypes 20 (July 2005), 1-11.
[19]
Lee, K., Taylor, O. K., Martinez, H. G., Milner, R., and Robinson, N. E. Capstan: Simulation of simulated annealing. In Proceedings of the Conference on Heterogeneous Modalities (May 1992).
[20]
Nehru, W. The impact of unstable methodologies on e-voting technology. In Proceedings of NDSS (July 1994).
[21]
Reddy, R. Improving fiber-optic cables and reinforcement learning. In Proceedings of the Workshop on Lossless Modalities (Mar. 1999).
[22]
Ritchie, D., Ritchie, D., Culler, D., Stearns, R., Bose, X., Leiserson, C., Bhabha, U. R., and Sato, V. Understanding of the Internet. In Proceedings of IPTPS (June 2001).
[23]
Sato, Q., and Smith, A. Decoupling Moore’s Law from hierarchical databases in SCSI disks. In Proceedings of IPTPS (Dec. 1997).
[24]
Shenker, S., and Thomas, I. Deconstructing cache coherence. In Proceedings of the Workshop on Scalable, Relational Modalities (Feb. 2004).
[25]
Simon, H., Tanenbaum, A., Blum, M., and Lakshminarayanan, K. An exploration of RAID using BordelaisMisuser. Tech. Rep. 98/30, IBM Research, May 1998.
[26]
Smith, R., Estrin, D., Thompson, K., Brown, X., and Adleman, L. Architecture considered harmful. In Proceedings of the Workshop on Flexible, “Fuzzy” Theory (Apr. 2005).
[27]
Sun, G. On the study of telephony. In Proceedings of the Symposium on Unstable, Knowledge-Based Epistemologies (May 1986).
[28]
Sutherland, I. Deconstructing systems. In Proceedings of ASPLOS (June 2000).
[29]
Suzuki, F. Y., Leary, T., Shastri, C., Lakshminarayanan, K., and Garcia-Molina, H. Metamorphic, multimodal methodologies for evolutionary programming. In Proceedings of the Workshop on Stable, Embedded Algorithms (Aug. 2005).
[30]
Takahashi, O., Gupta, W., and Hoare, C. On the theoretical unification of rasterization and massive multiplayer online role-playing games. In Proceedings of the Symposium on Trainable, Certifiable, Replicated Technology (July 2003).
[31]
Taylor, H., Morrison, R. T., Harris, Y., Bachman, C., Nygaard, K., Einstein, A., and Gupta, a. Byzantine fault tolerance considered harmful. In Proceedings of ASPLOS (Mar. 2003).
[32]
Thomas, X. K. Real-time, cooperative communication for e-business. In Proceedings of POPL (May 2004).
[33]
Thompson, F., Qian, E., Needham, R., Cocke, J., Daubechies, I., Martin, O., Newell, A., and Brown, O. Towards the understanding of consistent hashing. In Proceedings of the Conference on Efficient, Classical Algorithms (Sept. 1992).
[34]
Thompson, K. Simulating hash tables and DNS. IEEE JSAC 7 (Apr. 2001), 75-82.
[35]
Turing, A. Deconstructing IPv6 with ELOPS. In Proceedings of the Workshop on Atomic, Random Technology (Feb. 1995).
[36]
Turing, A., Minsky, M., Bhabha, C., and Sun, P. A methodology for the construction of courseware. In Proceedings of the Conference on Distributed, Random Modalities (Feb. 2004).
[37]
Ullman, J., and Ritchie, D. Distributed communication. In Proceedings of IPTPS (Nov. 2004).
[38]
Welsh, M., Schroedinger, E., Daubechies, I., and Shastri, W. A methodology for the analysis of hash tables. In Proceedings of OSDI (Oct. 2002).
[39]
White, V., and White, V. The influence of encrypted configurations on networking. Journal of Semantic, Flexible Theory 4 (July 2004), 154-198.
[40]
Wigleton, B., Anderson, G., Wang, Q., Morrison, R. T., and Codd, E. A synthesis of Web services. In Proceedings of IPTPS (Mar. 1999).
[41]
Wirth, N., and Hoare, C. A. R. Comparing DNS and checksums. OSR 310 (Jan. 2001), 159-191.
[42]
Zhao, B., Smith, A., and Perlis, A. Deploying architecture and Internet QoS. In Proceedings of NOSSDAV (July 2001).
[43]
Zhao, H. The effect of “smart” theory on hardware and architecture. In Proceedings of the USENIX Technical Conference (Apr. 2001).
[44]
Zheng, N. A methodology for the understanding of superpages. In Proceedings of SOSP (Dec. 2005).
[45]
Zheng, R., Smith, J., Chomsky, N., and Chandrasekharan, B. X. Comparing systems and redundancy with CandyUre. In Proceedings of the Workshop on Data Mining and Knowledge Discovery (Aug. 2003).

Apocalypse Now or Later?

Americans love their apocalypses. So, should demise come at the hands of a natural catastrophe, hastened by human (in)action, or should it come courtesy of an engineered biological or nuclear disaster? You choose. Isn’t this so much fun, thinking about absolute extinction?

Ira Chernus, Professor of Religious Studies at the University of Colorado at Boulder, brings us a much-needed scholarly account of our love affair with all things apocalyptic. But our fascination with Armageddon — often driven by hope — does nothing to resolve the ultimate conundrum: regardless of the type of ending, it is unlikely that Bruce Willis will be featured.

From TomDispatch / Salon:

Wherever we Americans look, the threat of apocalypse stares back at us.

Two clouds of genuine doom still darken our world: nuclear extermination and environmental extinction. If they got the urgent action they deserve, they would be at the top of our political priority list.

But they have a hard time holding our attention, crowded out as they are by a host of new perils also labeled “apocalyptic”: mounting federal debt, the government’s plan to take away our guns, corporate control of the Internet, the Comcast-Time Warner mergerocalypse, Beijing’s pollution airpocalypse, the American snowpocalypse, not to speak of earthquakes and plagues. The list of topics, thrown at us with abandon from the political right, left, and center, just keeps growing.

Then there’s the world of arts and entertainment where selling the apocalypse turns out to be a rewarding enterprise. Check out the website “Romantically Apocalyptic,” Slash’s album “Apocalyptic Love,” or the history-lite documentary “Viking Apocalypse” for starters. These days, mathematicians even have an “apocalyptic number.”

Yes, the A-word is now everywhere, and most of the time it no longer means “the end of everything,” but “the end of anything.” Living a life so saturated with apocalypses undoubtedly takes a toll, though it’s a subject we seldom talk about.

So let’s lift the lid off the A-word, take a peek inside, and examine how it affects our everyday lives. Since it’s not exactly a pretty sight, it’s easy enough to forget that the idea of the apocalypse has been a container for hope as well as fear. Maybe even now we’ll find some hope inside if we look hard enough.

A Brief History of Apocalypse

Apocalyptic stories have been around at least since biblical times, if not earlier. They show up in many religions, always with the same basic plot: the end is at hand; the cosmic struggle between good and evil (or God and the Devil, as the New Testament has it) is about to culminate in catastrophic chaos, mass extermination, and the end of the world as we know it.

That, however, is only Act I, wherein we wipe out the past and leave a blank cosmic slate in preparation for Act II: a new, infinitely better, perhaps even perfect world that will arise from the ashes of our present one. It’s often forgotten that religious apocalypses, for all their scenes of destruction, are ultimately stories of hope; and indeed, they have brought it to millions who had to believe in a better world a-comin’, because they could see nothing hopeful in this world of pain and sorrow.

That traditional religious kind of apocalypse has also been part and parcel of American political life since, in Common Sense, Tom Paine urged the colonies to revolt by promising, “We have it in our power to begin the world over again.”

When World War II — itself now sometimes called an apocalypse — ushered in the nuclear age, it brought a radical transformation to the idea. Just as novelist Kurt Vonnegut lamented that the threat of nuclear war had robbed us of “plain old death” (each of us dying individually, mourned by those who survived us), the theologically educated lamented the fate of religion’s plain old apocalypse.

After this country’s “victory weapon” obliterated two Japanese cities in August 1945, most Americans sighed with relief that World War II was finally over. Few, however, believed that a permanently better world would arise from the radioactive ashes of that war. In the 1950s, even as the good times rolled economically, America’s nuclear fear created something historically new and ominous — a thoroughly secular image of the apocalypse.  That’s the one you’ll get first if you type “define apocalypse” into Google’s search engine: “the complete final destruction of the world.” In other words, one big “whoosh” and then… nothing. Total annihilation. The End.

Apocalypse as utter extinction was a new idea. Surprisingly soon, though, most Americans were (to adapt the famous phrase of filmmaker Stanley Kubrick) learning how to stop worrying and get used to the threat of “the big whoosh.” With the end of the Cold War, concern over a world-ending global nuclear exchange essentially evaporated, even if the nuclear arsenals of that era were left ominously in place.

Meanwhile, another kind of apocalypse was gradually arising: environmental destruction so complete that it, too, would spell the end of all life.

This would prove to be brand new in a different way. It is, as Todd Gitlin has so aptly termed it, history’s first “slow-motion apocalypse.” Climate change, as it came to be called, had been creeping up on us “in fits and starts,” largely unnoticed, for two centuries. Since it was so different from what Gitlin calls “suddenly surging Genesis-style flood” or the familiar “attack out of the blue,” it presented a baffling challenge. After all, the word apocalypse had been around for a couple of thousand years or more without ever being associated in any meaningful way with the word gradual.
The eminent historian of religions Mircea Eliade once speculated that people could grasp nuclear apocalypse because it resembled Act I in humanity’s huge stock of apocalypse myths, where the end comes in a blinding instant — even if Act II wasn’t going to follow. This mythic heritage, he suggested, remains lodged in everyone’s unconscious, and so feels familiar.

But in a half-century of studying the world’s myths, past and present, he had never found a single one that depicted the end of the world coming slowly. This means we have no unconscious imaginings to pair it with, nor any cultural tropes or traditions that would help us in our struggle to grasp it.

That makes it so much harder for most of us even to imagine an environmentally caused end to life. The very category of “apocalypse” doesn’t seem to apply. Without those apocalyptic images and fears to motivate us, a sense of the urgent action needed to avert such a slowly emerging global catastrophe lessens.

All of that (plus of course the power of the interests arrayed against regulating the fossil fuel industry) might be reason enough to explain the widespread passivity that puts the environmental peril so far down on the American political agenda. But as Dr. Seuss would have said, that is not all! Oh no, that is not all.

Apocalypses Everywhere

When you do that Google search on apocalypse, you’ll also get the most fashionable current meaning of the word: “Any event involving destruction on an awesome scale; [for example] ‘a stock market apocalypse.’” Welcome to the age of apocalypses everywhere.

With so many constantly crying apocalyptic wolf or selling apocalyptic thrills, it’s much harder now to distinguish between genuine threats of extinction and the cheap imitations. The urgency, indeed the very meaning, of apocalypse continues to be watered down in such a way that the word stands in danger of becoming virtually meaningless. As a result, we find ourselves living in an era that constantly reflects premonitions of doom, yet teaches us to look away from the genuine threats of world-ending catastrophe.

Oh, America still worries about the Bomb — but only when it’s in the hands of some “bad” nation. Once that meant Iraq (even if that country, under Saddam Hussein, never had a bomb and in 2003, when the Bush administration invaded, didn’t even have a bomb program). Now, it means Iran — another country without a bomb or any known plan to build one, but with the apocalyptic stare focused on it as if it already had an arsenal of such weapons — and North Korea.

These days, in fact, it’s easy enough to pin the label “apocalyptic peril” on just about any country one loathes, even while ignoring friends, allies, and oneself. We’re used to new apocalyptic threats emerging at a moment’s notice, with little (or no) scrutiny of whether the A-word really applies.

What’s more, the Cold War era fixed a simple equation in American public discourse: bad nation + nuclear weapon = our total destruction. So it’s easy to buy the platitude that Iran must never get a nuclear weapon or it’s curtains. That leaves little pressure on top policymakers and pundits to explain exactly how a few nuclear weapons held by Iran could actually harm Americans.

Meanwhile, there’s little attention paid to the world’s largest nuclear arsenal, right here in the U.S. Indeed, America’s nukes are quite literally impossible to see, hidden as they are underground, under the seas, and under the wraps of “top secret” restrictions. Who’s going to worry about what can’t be seen when so many dangers termed “apocalyptic” seem to be in plain sight?

Environmental perils are among them: melting glaciers and open-water Arctic seas, smog-blinded Chinese cities, increasingly powerful storms, and prolonged droughts. Yet most of the time such perils seem far away and like someone else’s troubles. Even when dangers in nature come close, they generally don’t fit the images in our apocalyptic imagination. Not surprisingly, then, voices proclaiming the inconvenient truth of a slowly emerging apocalypse get lost in the cacophony of apocalypses everywhere. Just one more set of boys crying wolf and so remarkably easy to deny or stir up doubt about.

Death in Life

Why does American culture use the A-word so promiscuously? Perhaps we’ve been living so long under a cloud of doom that every danger now readily takes on the same lethal hue.

Psychiatrist Robert Lifton predicted such a state years ago when he suggested that the nuclear age had put us all in the grips of what he called “psychic numbing” or “death in life.” We can no longer assume that we’ll die Vonnegut’s plain old death and be remembered as part of an endless chain of life. Lifton’s research showed that the link between death and life had become, as he put it, a “broken connection.”

As a result, he speculated, our minds stop trying to find the vitalizing images necessary for any healthy life. Every effort to form new mental images only conjures up more fear that the chain of life itself is coming to a dead end. Ultimately, we are left with nothing but “apathy, withdrawal, depression, despair.”

If that’s the deepest psychic lens through which we see the world, however unconsciously, it’s easy to understand why anything and everything can look like more evidence that The End is at hand. No wonder we have a generation of American youth and young adults who take a world filled with apocalyptic images for granted.

Think of it as, in some grim way, a testament to human resiliency. They are learning how to live with the only reality they’ve ever known (and with all the irony we’re capable of, others are learning how to sell them cultural products based on that reality). Naturally, they assume it’s the only reality possible. It’s no surprise that “The Walking Dead,” a zombie apocalypse series, is their favorite TV show, since it reveals (and revels in?) what one TV critic called the “secret life of the post-apocalyptic American teenager.”

Perhaps the only thing that should genuinely surprise us is how many of those young people still manage to break through psychic numbing in search of some way to make a difference in the world.

Yet even in the political process for change, apocalypses are everywhere. Regardless of the issue, the message is typically some version of “Stop this catastrophe now or we’re doomed!” (An example: Stop the Keystone XL pipeline or it’s “game over”!) A better future is often implied between the lines, but seldom gets much attention because it’s ever harder to imagine such a future, no less believe in it.

No matter how righteous the cause, however, such a single-minded focus on danger and doom subtly reinforces the message of our era of apocalypses everywhere: abandon all hope, ye who live here and now.

Read the entire article here.

Image: Armageddon movie poster. Courtesy of Touchstone Pictures.

Abraham Lincoln Was a Sham President


This is not the opinion of theDiagonal. Rather, it’s the view of the revisionist thinkers over at the so-called “News Leader”, Fox News. I purposely avoid commenting on news and political events, but once in a while a story is so jaw-droppingly incredible that your friendly editor cannot keep away from his keyboard. Which brings me to Fox News.

The latest diatribe from the 24/7 conservative think tank is that Lincoln actually caused the Civil War. According to Fox analyst Andrew Napolitano, the Civil War was an unnecessary folly that Lincoln could have avoided had he chosen to pay off the South or to let slavery come to a natural end.

This is yet another example of the mindless, ideological drivel dished out daily by Fox. Are we next likely to see Fox defend Hitler’s “cleansing” of Europe as fine economic policy that the Allies should have let run its course? Ugh! One has to suppose that the present-day statistic of 30 million enslaved humans around the world is just as much a figment of the collective imaginarium that is Fox.

The one bright note to ponder about Fox and its finely-tuned propaganda machine comes from looking at its commercials. When the majority of its TV ads are for the over-60s — think Viagra, statins and catheters — you can sense that its aging demographic will soon sublimate to meet its alternate, heavenly reality.

From Salon:

“The Daily Show” had one of its best segments in a while on Monday night, ruthlessly and righteously taking Fox News legal analyst and libertarian Andrew Napolitano to task for using the airwaves to push his clueless and harmful revisionist understanding of the Civil War.

Jon Stewart and “senior black correspondent” Larry Wilmore criticized Napolitano for a Feb. 14 appearance on the Fox Business channel during which he called himself a “contrarian” when it comes to estimating former President Abraham Lincoln’s legacy and argued that the Civil War was unnecessary — and may not have even been about slavery, anyway!

“At the time that [Lincoln] was the president of the United States, slavery was dying a natural death all over the Western world,” Napolitano said. “Instead of allowing it to die, or helping it to die, or even purchasing the slaves and then freeing them — which would have cost a lot less money than the Civil War cost — Lincoln set about on the most murderous war in American history.”

Stewart quickly shredded this argument to pieces, noting that Lincoln spent much of 1862 trying (and failing) to convince border states to accept compensated emancipation, and that the South’s relationship with chattel slavery was fundamentally not just an economic but also a social system, one that it would never willingly abandon.

Soon after, Stewart turned to Wilmore, who noted that the Confederacy was “so committed to slavery that Lincoln didn’t die of natural causes.” Wilmore next pointed out that people who “think Lincoln started the Civil War because the North was ready to kill to end slavery” are mistaken. “[T]he truth was,” Wilmore said, “the South was ready to die to keep slavery.”

Stewart and Wilmore next highlighted that Napolitano doesn’t hate all wars, and in fact has a history of praising the Revolutionary War as necessary and just. “So it was heroic to fight for the proposition that all men are created equal, but when there’s a war to enforce that proposition, that’s wack?” Wilmore asked. “You know, there’s something not right when you feel the only black thing worth fighting for is tea.”

As the final dagger, Stewart and Wilmore noted that Napolitano has ranted at length on Fox about how taxation is immoral and unjust, prompting Wilmore to elegantly outline the problems with Napolitano-style libertarianism in a single paragraph. Speaking to Napolitano, Wilmore said:

You think it’s immoral for the government to reach into your pocket, rip your money away from its warm home and claim it as its own property, money that used to enjoy unfettered freedom is now conscripted to do whatever its new owner tells it to. Now, I know this is going to be a leap, but you know that sadness and rage you feel about your money? Well, that’s the way some of us feel about people.

Read the entire story here.

Video courtesy of The Daily Show with Jon Stewart, Comedy Central.


FOMO Reshaping You and Your Network

Fear of missing out (FOMO) and other negative feelings greatly outweigh the good ones in online social networks. The phenomenon is widespread and well-documented. Compound this with the unintuitive observation that your online friends will, on average, have more friends and be more successful than you (see the sketch below), and you have a recipe for a growing, deep-seated inferiority complex. Add to this other behavioral characteristics that are peculiar to, or exaggerated in, online social networks and you have a more fundamental recipe — one that threatens the very fabric of the network itself. Just consider how online trolling, status lurking, persona-curation, passive monitoring, stalking and deferred (dis-)liking are re-fashioning our behaviors and the networks themselves.
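
The “your friends have more friends than you” claim is not a put-down but the well-documented friendship paradox: anyone with many friends appears on many friend lists, so sampling people by friendship over-weights the popular. Here is a minimal sketch of the effect on a synthetic graph, assuming the networkx library is available; the network and numbers are illustrative, not Facebook data.

```python
import networkx as nx  # assumed available; any graph library would do

# The friendship paradox: in a network where some people are far more
# popular than others, the popular appear on many friend lists, so the
# average *friend* has more friends than the average *person*.
G = nx.barabasi_albert_graph(n=1000, m=3, seed=42)  # toy scale-free network

avg_degree = sum(d for _, d in G.degree()) / G.number_of_nodes()

# For each person, record the friend-count of every one of their friends,
# then average over all of those observations.
friend_degrees = [G.degree(f) for person in G for f in G.neighbors(person)]
avg_friend_degree = sum(friend_degrees) / len(friend_degrees)

print(f"the average person has {avg_degree:.1f} friends")
print(f"the average friend has {avg_friend_degree:.1f} friends")
# Typically prints roughly 6 vs. 12 or more: your friends really do have
# more friends than you, as a statistical artifact, not a personal failing.
```

The same sampling bias applies to anything correlated with popularity, including apparent success, which is part of why a social feed skews so relentlessly rosy.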

From ars technica:

I found out my new college e-mail address in 2005 from a letter in the mail. Right after opening the envelope, I went straight to the computer. I was part of a LiveJournal group made of incoming students, and we had all been eagerly awaiting our college e-mail addresses, which had a use above and beyond corresponding with professors or student housing: back then, they were required tokens for entry to the fabled thefacebook.com.

That was nine years ago, and Facebook has now been in existence for 10. But even in those early days, Facebook’s cultural impact can’t be overstated. A search for “Facebook” on Google Scholar alone now produces 1.2 million results from 2006 on; “Physics” only returns 456,000.

But in terms of presence, Facebook is flopping around a bit now. The ever-important “teens” despise it, and it’s not the runaway success, happy addiction, or awe-inspiring source of information it once was. We’ve curated our identities so hard and had enough experiences with unforeseen online conflict that Facebook can now feel more isolating than absorbing. But what we are dissatisfied with is what Facebook has been, not what it is becoming.

Even if the grand sociological experiment that was Facebook is now running a little dry, the company knows this—which is why it’s transforming Facebook into a completely different entity. And the cause of all this built-up disarray that’s pushing change? It’s us. To prove it, let’s consider the social constructs and weirdnesses Facebook gave rise to, how they ultimately undermined the site, and how these ideas are shaping Facebook into the company it is now and will become.

Cue that Randy Newman song

Facebook arrived late to the concept of online friending, long after researchers started wondering about the structure of these social networks. What Facebook did for friending, especially reciprocal friending, was write it so large that it became a common concern. How many friends you had, who did and did not friend you back, and who should friend each other first all became things that normal people worried about.

Once Facebook opened beyond colleges, it became such a one-to-one representation of an actual social network that scientists started to study it. They applied social theories like those of weak ties or identity creation to see how they played out sans, or in supplement to, face-to-face interactions.

In a 2007 study, when Facebook was still largely campus-bound, a group of researchers said that Facebook “appears to play an important role in the process by which students form and maintain social capital.” They were using it to keep in touch with old friends and “to maintain or intensify relationships characterized by some form of offline connection.”

This sounds mundane now, since Facebook is so integrated into much of our lives. Seeing former roommates or childhood friends posting updates to Facebook feels as commonplace as literally seeing them nearly every day back when we were still roommates at 20 or friends at eight.

But the ability to keep tabs on someone without having to be proactive about it—no writing an e-mail, making a phone call, etc.—became the unique selling factor of Facebook. Per the 2007 study above, Facebook became a rich opportunity for “convert[ing] latent ties into weak ties,” connections that are valuable because they are with people who are sufficiently distant socially to bring in new information and opportunities.

Some romantic pixels have been spilled about the way no one is ever lost to anyone anymore; most people, including ex-lovers, estranged family members, or missed connections are only a Wi-Fi signal away.

“Modern technology has made our worlds smaller, but perhaps it also has diminished life’s mysteries, and with them, some sense of romance,” writes David Vecsey in The New York Times. Vecsey cites a time when he tracked down a former lover “across two countries and an ocean,” something he would not have done in the absence of passive social media monitoring. “It was only in her total absence, in a total vacuum away from her, that I was able to appreciate the depth of love I felt.”

The art of the Facebook-stalk

While plenty of studies have been conducted on the productive uses of Facebook—forming or maintaining weak ties, supplementing close relationships, or fostering new, casual ones—there are plenty that also touch on the site as a means for passive monitoring. Whether it was someone we’d never met, a new acquaintance, or an unrequited infatuation, Facebook eventually had enough breadth that you could call up virtually anyone’s profile, if only to see how fat they’ve gotten.

One study referred to this process as “social investigation.” We developed particular behaviors to avoid creating suspicion: do not “like” anything by the object of a stalking session, or if we do like it, don’t “like” too quickly; be careful not to type a name we want to search into the status field by accident; set an object of monitoring as a “close friend,” even if they aren’t, so their updates show up without fail; friend their friends; surreptitiously visit profile pages multiple times a day in case we missed anything.

This passive monitoring is one of the more utilitarian uses of Facebook. It’s also one of the most addictive. The (fictionalized) movie The Social Network closes with Facebook’s founder, Mark Zuckerberg, gazing at the Facebook profile of a high-school crush. Facebook did away with the necessity of keeping tabs on anyone. You simply had all of the tabs, all of the time, with the most recent information whenever you wanted to look at them.

The book Digital Discourse cites a classic example of the Facebook stalk in an IM conversation between two teenagers:

“I just saw what Tanya Eisner wrote on your Facebook wall. Go to her house,” one says.
“Woah, didn’t even see that til right now,” replies the other.
“Haha it looks like I stalk you… which I do,” says the first.
“I stalk u too its ok,” comforts the second.

But even innocent, casual information recon in the form of a Facebook stalk can rub us the wrong way. Any instance of a Facebook interaction that ends with an unexpected third body’s involvement can taint the rest of users’ Facebook behavior, making us feel watched.

Digital Discourse states that “when people feel themselves to be the objects of stalking, creeping, or lurking by third parties, they express annoyance or even moral outrage.” It cites an example of another teenager who gets a wall post from a person she barely knows, and it explains something she wrote about in a status update. “Don’t stalk my status,” she writes in mocking command to another friend, as if talking to the interloper.

You are who you choose to be

“The advent of the Internet has changed the traditional conditions of identity production,” reads a study from 2008 on how people presented themselves on Facebook. People had been curating their presences online for a long time before Facebook, but the fact that Facebook required real names and, for a long time after its inception, association with an educational institution made researchers wonder if it would make people hew a little closer to reality.

But beyond the bounds of being tied to a real name, users still projected an idealized self to others; a type of “possible self,” or many possible selves, depending on their sharing settings. Rather than try to describe themselves to others, users projected a sort of aspirational identity.

People were more likely to associate themselves with cultural touchstones, like movies, books, or music, than really identify themselves. You might not say you like rock music, but you might write Led Zeppelin as one of your favorite bands, and everyone else can infer your taste in music as well as general taste and coolness from there.

These identity proxies also became vectors for seeking approval. “The appeal is as much to the likeability of my crowd, the desirability of my boyfriend, or the magic of my music as it is to the personal qualities of the Facebook users themselves,” said the study. The authors also noted that, for instance, users tended to post photos of themselves mostly in groups in social situations. Even the profile photos, which would ostensibly have a single subject, were socially styled.

As the study concluded, “identity is not an individual characteristic; it is not an expression of something innate in a person, it is rather a social product, the outcome of a given social environment and hence performed differently in varying contexts.” Because Facebook was so susceptible to this “performance,” so easily controlled and curated, it quickly became less about real people and more about highlight reels.

We came to Facebook to see other real people, but everyone, even casual users, saw it could be gamed for personal benefit. Inflicting our groomed identities on each other soon became its own problem.

Fear of missing out

A long-time problem of social networks has been that the bad feelings they can generate are greatly disproportional to good ones.

In strict terms of self-motivation, posting something and getting a good reception feels good. But most of Facebook use is watching other people post about their own accomplishments and good times. For a social network of 300 friends with an even distribution of auspicious life events, you are seeing 300 times as many good things happen to others as happen to you (of course, everyone has the same amount of good luck, but in bulk for the consumer, it doesn’t feel that way). If you were happy before looking at Facebook, or even after posting your own good news, you’re not now.

The feelings of inadequacy did start to drive people back to Facebook. Even in the middle of our own vacations, celebration dinners, or weddings, we might check Facebook during or after to compare notes and see if we really had the best time possible.

That feeling became known as FOMO, “fear of missing out.” As Jenna Wortham wrote in The New York Times, “When we scroll through pictures and status updates, the worry that tugs at the corners of our minds is set off by the fear of regret… we become afraid that we’ve made the wrong decision about how to spend our time.”

Even if you had your own great stuff to tell Facebook about, someone out there is always doing better. And Facebook won’t let you forget. The brewing feeling of inferiority means users don’t post about stuff that might be too lame. They might start to self-censor, and then the bar for what is worth the “risk” of posting rises higher and higher. As people stop posting, there is less to see, less reason to come back and interact, like, or comment on other people’s material. Ultimately, people, in turn, have less reason to post.

Read the entire article here.

Influencing and Bullying

We sway our co-workers. We coach teams. We cajole our spouses and we parent our kids. But what distinguishes this behavior from more overt and negative forms of influence, such as bullying? It’s a question very much worth exploring, since we are all bullies at some point — much more often than we tend to admit. And, not surprisingly, bullying goes hand-in-hand with deceit.

From the NYT:

WHAT is the chance that you could get someone to lie for you? What about vandalizing public property at your suggestion?

Most of us assume that others would go along with such schemes only if, on some level, they felt comfortable doing so. If not, they’d simply say “no,” right?

Yet research suggests that saying “no” can be more difficult than we believe — and that we have more power over others’ decisions than we think.

Social psychologists have spent decades demonstrating how difficult it can be to say “no” to other people’s propositions, even when they are morally questionable — consider Stanley Milgram’s infamous experiments, in which participants were persuaded to administer what they believed to be dangerous electric shocks to a fellow participant.

Countless studies have subsequently shown that we find it similarly difficult to resist social pressure from peers, friends and colleagues. Our decisions regarding everything from whether to turn the lights off when we leave a room to whether to call in sick to take a day off from work are affected by the actions and opinions of our neighbors and colleagues.

But what about those times when we are the ones trying to get someone to act unethically? Do we realize how much power we wield with a simple request, suggestion or dare? New research by my students and me suggests that we don’t.

We examined this question in a series of studies in which we had participants ask strangers to perform unethical acts. Before making their requests, participants predicted how many people they thought would comply. In one study, 25 college students asked 108 unfamiliar students to vandalize a library book. Targets who complied wrote the word “pickle” in pen on one of the pages.

As in the Milgram studies, many of the targets protested. They asked the instigators to take full responsibility for any repercussions. Yet, despite their hesitation, a large portion still complied.

Most important for our research question, more targets complied than participants had anticipated. Our participants predicted that an average of 28.5 percent would go along. In fact, fully half of those who were approached agreed. Moreover, 87 percent of participants underestimated the number they would be able to persuade to vandalize the book.

In another study, we asked 155 participants to think about a series of ethical dilemmas — for example, calling in sick to work to attend a baseball game. One group was told to think about these misdeeds from the perspective of a person deciding whether to commit them, and to imagine receiving advice from a colleague suggesting they do it or not. Another group took the opposite side, and thought about them from the perspective of someone advising another person about whether or not to do each deed.

Those in the first group were strongly influenced by the advice they received. When they were urged to engage in the misdeed, they said they would be more comfortable doing so than when they were advised not to. Their average reported comfort level fell around the midpoint of a 7-point scale after receiving unethical advice, but fell closer to the low end after receiving ethical advice.

However, participants in the “advisory” role thought that their opinions would hold little sway over the other person’s decision, assuming that participants in the first group would feel equally comfortable regardless of whether they had received unethical or ethical advice.

Taken together, our research, which was recently published in the journal Personality and Social Psychology Bulletin, suggests that we often fail to recognize the power of social pressure when we are the ones doing the pressuring.

Notably, this tendency may be especially pronounced in cultures like the United States’, where independence is so highly valued. American culture idolizes individuals who stand up to peer pressure. But that doesn’t mean that most do; in fact, such idolatry may hide, and thus facilitate, compliance under social pressure, especially when we are the ones putting on the pressure.

Consider the roles in the Milgram experiments: Most people have probably fantasized about being one of the subjects and standing up to the pressure. But in daily life, we play the role of the metaphorical experimenter in those studies as often as we play the participant. We bully. We pressure others to blow off work to come out for a drink or stiff a waitress who is having a bad night. These suggestions are not always wrong or unethical, but they may impact others’ behaviors more than we realize.

Read the entire story here.

It’s a Woman’s World

[tube]V4UWxlVvT1A[/tube]

Well, not really. Though there is no doubting that the planet would look rather different if the genders had truly equal opportunities and pay-offs, or if women generally held all the power that tends to be concentrated in masculine hands.

A short movie by French actor and film-maker Eleonoré Pourriat imagines what our Western culture might resemble if the traditional female-male roles were reversed.

A portent of the future? Perhaps not, but thought-provoking nonetheless. One has to wonder whether, if women held all the levers and trappings of power, they would do a better job than men. Or, perhaps not. It may just be possible that power corrupts — regardless of the gender of the empowered.

From the Independent:

Imagine a world where it is the women who pee in the street, jog bare-chested and harass and physically assault the men. Such a world has just gone viral on the internet. A nine-minute satirical film made by Eleonoré Pourriat, the French actress, script-writer and director, has clocked up hundreds of thousands of views in recent days.

The movie, Majorité Opprimée or “Oppressed Majority”, was made in 2010. It caused a flurry of interest when it was first posted on YouTube early last year. But now its time seems to have come. “It is astonishing, just incredible that interest in my film has suddenly exploded in this way,” Ms Pourriat told The Independent. “Obviously, I have touched a nerve. Women in France, but not just in France, feel that everyday sexism has been allowed to go on for too long.”

The star of the short film is Pierre, who is played very convincingly by Pierre Bénézit. He is a slightly gormless stay-at-home father, who spends a day besieged by the casual or aggressive sexism of women in a female-dominated planet. The film, in French with English subtitles, begins in a jokey way and turns gradually, and convincingly, nasty. It is not played for cheap laughs. It has a Swiftian capacity to disturb by the simple trick of reversing roles.

Pierre, pushing his baby-buggy, is casually harassed by a bare-breasted female jogger. He meets a male, Muslim babysitter, who is forced by his wife to wear a balaclava in public. He is verbally abused – “Think I don’t see you shaking your arse at me?” – by a drunken female down-and-out. He is sexually assaulted and humiliated by a knife-wielding girl gang. (“Say your dick is small or I’ll cut off your precious jewels.”)

He is humiliated a second time by a policewoman, who implies that he invented the gang assault. “Daylight and no witnesses, that’s strange,” she says. As she takes Pierre’s statement, the policewoman patronises a pretty, young policeman. “I need a coffee, cutie.”

Pierre’s self-important working wife arrives to collect him. She comforts him at first, calling him “kitten” and “pumpkin”. When he complains that he can no longer stand the permanent aggression of a female-dominated society, she says that he is to blame because of the way he dresses: in short sleeves, flip-flops and Bermudas.

At the second, or third, time of asking, interest in Ms Pourriat’s highly charged little movie has exploded in recent days on social media and on feminist and anti-feminist websites on both sides of the Channel and on both sides of the Atlantic. Some men refuse to see the point. “Sorry, but I would adore to live such a life,” said one French male blogger. “To be raped by a gang of girls. Great! That’s every man’s fantasy.”

Ms Pourriat, 42, acts and writes scripts for comedy movies in France. This was her first film as director. “It is rooted absolutely in my own experience as a woman living in France,” she tells me. “I think French men are worse than men elsewhere, but the incredible success of the movie suggests that it is not just a French problem.

“What angers me is that many women seem to accept this kind of behaviour from men or joke about it. I had long wanted to make a film that would turn the situation on its head.”

Read the entire article here.

Video: Majorité Opprimée or “Oppressed Majority” by Eleonoré Pourriat.


13.6 Billion Versus 4004 BCE

The first number, 13.6 billion, is the age in years of the oldest known star in the cosmos. It was discovered recently by astronomers in Australia using the Australian National University’s SkyMapper telescope. The star is located in our Milky Way galaxy, about 6,000 light years away. A little closer to home, in Kentucky at the aptly named Creation Museum, the Synchronological Chart places the beginning of time and all things at 4004 BCE.

Interestingly enough, both Australia and Kentucky should not exist according to the flat-earth myth or the widespread pre-Columbus view of our world with an edge at the visible horizon. But the evolution-versus-creationism debates continue unabated. The chasm between the two camps remains a mere 13.6 billion years, give or take a handful of millennia. Perhaps, over time, those who subscribe to reason and the scientific method will prevail — an apt example of survival of the most adaptable at work.

Hitch, we still miss you!

From ars technica:

In 1878, the American scholar and minister Sebastian Adams put the final touches on the third edition of his grandest project: a massive Synchronological Chart that covers nothing less than the entire history of the world in parallel, with the deeds of kings and kingdoms running along together in rows over 25 horizontal feet of paper. When the chart reaches 1500 BCE, its level of detail becomes impressive; at 400 CE it becomes eyebrow-raising; at 1300 CE it enters the realm of the wondrous. No wonder, then, that in their 2010 book Cartographies of Time: A History of the Timeline, authors Daniel Rosenberg and Anthony Grafton call Adams’ chart “nineteenth-century America’s surpassing achievement in complexity and synthetic power… a great work of outsider thinking.”

The chart is also the last thing that visitors to Kentucky’s Creation Museum see before stepping into the gift shop, where full-sized replicas can be purchased for $40.

That’s because, in the world described by the museum, Adams’ chart is more than a historical curio; it remains an accurate timeline of world history. Time is said to have begun in 4004 BCE with the creation of Adam, who went on to live for 930 more years. In 2348 BCE, the Earth was then reshaped by a worldwide flood, which created the Grand Canyon and most of the fossil record even as Noah rode out the deluge in an 81,000 ton wooden ark. Pagan practices at the eight-story high Tower of Babel eventually led God to cause a “confusion of tongues” in 2247 BCE, which is why we speak so many different languages today.

Adams notes on the second panel of the chart that “all the history of man, before the flood, extant, or known to us, is found in the first six chapters of Genesis.”

Ken Ham agrees. Ham, CEO of Answers in Genesis (AIG), has become perhaps the foremost living young Earth creationist in the world. He has authored more books and articles than seems humanly possible and has built AIG into a creationist powerhouse. He also made national headlines when the slickly modern Creation Museum opened in 2007.

He has also been looking for the opportunity to debate a prominent supporter of evolution.

And so it was that, as a severe snow and sleet emergency settled over the Cincinnati region, 900 people climbed into cars and wound their way out toward the airport to enter the gates of the Creation Museum. They did not come for the petting zoo, the zip line, or the seasonal camel rides, nor to see the animatronic Noah chortle to himself about just how easy it had really been to get dinosaurs inside his ark. They did not come to see The Men in White, a 22-minute movie that plays in the museum’s halls in which a young woman named Wendy sees that what she’s been taught about evolution “doesn’t make sense” and is then visited by two angels who help her understand the truth of six-day special creation. They did not come to see the exhibits explaining how all animals had, before the Fall of humanity into sin, been vegetarians.

They came to see Ken Ham debate TV presenter Bill Nye the Science Guy—an old-school creation v. evolution throwdown for the PowerPoint age. Even before it began, the debate had been good for both men. Traffic to AIG’s website soared by 80 percent, Nye appeared on CNN, tickets sold out in two minutes, and post-debate interviews were lined up with Piers Morgan Live and MSNBC.

While plenty of Ham supporters filled the parking lot, so did people in bow ties and “Bill Nye is my Homeboy” T-shirts. They all followed the stamped dinosaur tracks to the museum’s entrance, where a pack of AIG staffers wearing custom debate T-shirts stood ready to usher them into “Discovery Hall.”

Security at the Creation Museum is always tight; the museum’s security force is made up of sworn (but privately funded) Kentucky peace officers who carry guns, wear flat-brimmed state trooper-style hats, and operate their own K-9 unit. For the debate, Nye and Ham had agreed to more stringent measures. Visitors passed through metal detectors complete with secondary wand screenings, packages were prohibited in the debate hall itself, and the outer gates were closed 15 minutes before the debate began.

Inside the hall, packed with bodies and the blaze of high-wattage lights, the temperature soared. The empty stage looked—as everything at the museum does—professionally designed, with four huge video screens, custom debate banners, and a pair of lecterns sporting Mac laptops. 20 different video crews had set up cameras in the hall, and 70 media organizations had registered to attend. More than 10,000 churches were hosting local debate parties. As AIG technical staffers made final preparations, one checked the YouTube-hosted livestream—242,000 people had already tuned in before start time.

An AIG official took the stage eight minutes before start time. “We know there are people who disagree with each other in this room,” he said. “No cheering or—please—any disruptive behavior.”

At 6:59pm, the music stopped and the hall fell silent but for the suddenly prominent thrumming of the air conditioning. For half a minute, the anticipation was electric, all eyes fixed on the stage, and then the countdown clock ticked over to 7:00pm and the proceedings snapped to life. Nye, wearing his traditional bow tie, took the stage from the left; Ham appeared from the right. The two shook hands in the center to sustained applause, and CNN’s Tom Foreman took up his moderating duties.

Ham had won the coin toss backstage and so stepped to his lectern to deliver brief opening remarks. “Creation is the only viable model of historical science confirmed by observational science in today’s modern scientific era,” he declared, blasting modern textbooks for “imposing the religion of atheism” on students.

“We’re teaching people to think critically!” he said. “It’s the creationists who should be teaching the kids out there.”

And we were off.

Two kinds of science

Digging in the fossil fields of Colorado or North Dakota, scientists regularly uncover the bones of ancient creatures. No one doubts the existence of the bones themselves; they lie on the ground for anyone to observe or weigh or photograph. But in which animal did the bones originate? How long ago did that animal live? What did it look like? One of Ham’s favorite lines is that the past “doesn’t come with tags”—so the prehistory of a stegosaurus thigh bone has to be interpreted by scientists, who use their positions in the present to reconstruct the past.

For mainstream scientists, this is simply an obvious statement of our existential position. Until a real-life Dr. Emmett “Doc” Brown finds a way to power a DeLorean with a 1.21 gigawatt flux capacitor in order to shoot someone back through time to observe the flaring-forth of the Universe, the formation of the Earth, or the origins of life, the prehistoric past can’t be known except by interpretation. Indeed, this isn’t true only of prehistory; as Nye tried to emphasize, forensic scientists routinely use what they know of nature’s laws to reconstruct past events like murders.

For Ham, though, science is broken into two categories, “observational” and “historical,” and only observational science is trustworthy. In the initial 30-minute presentation of his position, Ham hammered the point home.

“You don’t observe the past directly,” he said. “You weren’t there.”

Ham spoke with the polish of a man who has covered this ground a hundred times before, has heard every objection, and has a smooth answer ready for each one.

When Bill Nye talks about evolution, Ham said, that’s “Bill Nye the Historical Science Guy” speaking—with “historical” being a pejorative term.

In Ham’s world, only changes that we can observe directly are the proper domain of science. Thus, when confronted with the issue of speciation, Ham readily admits that contemporary lab experiments on fast-breeding creatures like mosquitoes can produce new species. But he says that’s simply “micro-evolution” below the family level. He doesn’t believe that scientists can observe “macro-evolution,” such as the alteration of a lobe-finned fish into a tiger over millions of years.

Because they can’t see historical events unfold, scientists must rely on reconstructions of the past. Those might be accurate, but they simply rely on too many “assumptions” for Ham to trust them. When confronted during the debate with evidence from ancient trees which have more rings than there are years on the Adams Synchronological Chart, Ham simply shrugged.

“We didn’t see those layers laid down,” he said.

To him, the calculus of “one ring, one year” is merely an assumption when it comes to the past—an assumption possibly altered by cataclysmic events such as Noah’s flood.

In other words, “historical science” is dubious; we should defer instead to the “observational” account of someone who witnessed all past events: God, said to have left humanity an eyewitness account of the world’s creation in the book of Genesis. All historical reconstructions should thus comport with this more accurate observational account.

Mainstream scientists don’t recognize this divide between observational and historical ways of knowing (much as they reject Ham’s distinction between “micro” and “macro” evolution). Dinosaur bones may not come with tags, but neither does observed contemporary reality—think of a doctor presented with a set of patient symptoms, who then has to interpret what she sees in order to arrive at a diagnosis.

Given that the distinction between two kinds of science provides Ham’s key reason for accepting the “eyewitness account” of Genesis as a starting point, it was unsurprising to see Nye take generous whacks at the idea. You can’t observe the past? “That’s what we do in astronomy,” said Nye in his opening presentation. Since light takes time to get here, “All we can do in astronomy is look at the past. By the way, you’re looking at the past right now.”

Those in the present can study the past with confidence, Nye said, because natural laws are generally constant and can be used to extrapolate into the past.

“This idea that you can separate the natural laws of the past from the natural laws you have now is at the heart of our disagreement,” Nye said. “For lack of a better word, it’s magical. I’ve appreciated magic since I was a kid, but it’s not what we want in mainstream science.”

How do scientists know that these natural laws are correctly understood in all their complexity and interplay? What operates as a check on their reconstructions? That’s where the predictive power of evolutionary models becomes crucial, Nye said. Those models of the past should generate predictions which can then be verified—or disproved—through observations in the present.

Read the entire article here.

MondayMap: Mississippi is Syria; Colorado is Slovenia

A fascinating map re-imagines life expectancy in the United States, courtesy of Olga Khazan over at The Atlantic, drawing on data from measureofamerica.org. The premise of the map is a simple one: match the average life expectancy of each state of the union with that of a country having a similar figure. Voila. The lowest life expectancy belongs to Mississippi, at 75 years, which equates with that of Syria. The highest, at 81.3 years, belongs to Hawaii, which pairs with Cyprus.

From the Atlantic:

American life expectancy has leapt up some 30 years in the past century, and we now live roughly 79.8 years on average. That’s not terrible, but it’s not fantastic either: We rank 35th in the world as far as lifespan, nestled right between Costa Rica and Chile. But looking at life expectancy by state, it becomes clear that where you live in America, at least to some extent, determines when you’ll die.

Here, I’ve found the life expectancy for every state to the tenth of a year using the data and maps from the Measure of America, a nonprofit group that tracks human development. Then, I paired it up with the nearest country by life expectancy from the World Health Organization’s 2013 data. When there was no country with that state’s exact life expectancy, I paired it with the nearest matching country, which was always within two-tenths of a year.
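
Mechanically, the pairing described above is a nearest-neighbour lookup on a single number. Here is a minimal Python sketch of the idea — an editorial illustration only; the figures below are toy placeholders, not the actual Measure of America or WHO values:

```python
# A minimal sketch of the pairing described above: match each state
# to the country whose life expectancy lies closest. The numbers are
# illustrative placeholders, not the real dataset values.
state_le = {"Mississippi": 75.0, "Colorado": 80.0, "Hawaii": 81.3}
country_le = {"Syria": 75.1, "Slovenia": 79.9, "Cyprus": 81.2}

for state, years in state_le.items():
    nearest = min(country_le, key=lambda c: abs(country_le[c] - years))
    print(f"{state} ({years}) -> {nearest} ({country_le[nearest]})")
```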

There’s profound variation by state, from a low of 75 years in Mississippi to a high of 81.3 in Hawaii. Mostly, we resemble tiny, equatorial hamlets like Kuwait and Barbados. At our worst, we look more like Malaysia or Oman, and at our best, like the United Kingdom. No state approaches the life expectancies of most European countries or some Asian ones. Icelandic people can expect to live a long 83.3 years, and that’s nothing compared to the Japanese, who live well beyond 84.

Life expectancy can be causal, a factor of diet, environment, medical care, and education. But it can also be recursive: People who are chronically sick are less likely to become wealthy, and thus less likely to live in affluent areas and have access to the great doctors and Whole-Foods kale that would have helped them live longer.

It’s worth noting that the life expectancy for certain groups within the U.S. can be much higher—or lower—than the norm. The life expectancy for African Americans is, on average, 3.8 years shorter than that of whites. Detroit has a life expectancy of just 77.6 years, but that city’s Asian Americans can expect to live 89.3 years.

Read the entire article here.

Business Decision-Making Welcomes Science

It is likely that business will never eliminate gut instinct from the decision-making process. However, as data, now big data, increasingly pervades every crevice of every organization, the use of data-driven decisions will become the norm. As this happens, more and more businesses find themselves employing data scientists to help filter, categorize, mine and analyze these mountains of data in meaningful ways.

The caveat, of course, is that data, big data and an even bigger reliance on that data require subject matter expertise and analysts with critical thinking skills and sound judgement — data cannot be used blindly.

From Technology Review:

Throughout history, innovations in instrumentation—the microscope, the telescope, and the cyclotron—have repeatedly revolutionized science by improving scientists’ ability to measure the natural world. Now, with human behavior increasingly reliant on digital platforms like the Web and mobile apps, technology is effectively “instrumenting” the social world as well. The resulting deluge of data has revolutionary implications not only for social science but also for business decision making.

As enthusiasm for “big data” grows, skeptics warn that overreliance on data has pitfalls. Data may be biased and is almost always incomplete. It can lead decision makers to ignore information that is harder to obtain, or make them feel more certain than they should. The risk is that in managing what we have measured, we miss what really matters—as Vietnam-era Secretary of Defense Robert McNamara did in relying too much on his infamous body count, and as bankers did prior to the 2007–2009 financial crisis in relying too much on flawed quantitative models.

The skeptics are right that uncritical reliance on data alone can be problematic. But so is overreliance on intuition or ideology. For every Robert McNamara, there is a Ron Johnson, the CEO whose disastrous tenure as the head of JC Penney was characterized by his dismissing data and evidence in favor of instincts. For every flawed statistical model, there is a flawed ideology whose inflexibility leads to disastrous results.

So if data is unreliable and so is intuition, what is a responsible decision maker supposed to do? While there is no correct answer to this question—the world is too complicated for any one recipe to apply—I believe that leaders across a wide range of contexts could benefit from a scientific mind-set toward decision making.

A scientific mind-set takes as its inspiration the scientific method, which at its core is a recipe for learning about the world in a systematic, replicable way: start with some general question based on your experience; form a hypothesis that would resolve the puzzle and that also generates a testable prediction; gather data to test your prediction; and finally, evaluate your hypothesis relative to competing hypotheses.

The scientific method is largely responsible for the astonishing increase in our understanding of the natural world over the past few centuries. Yet it has been slow to enter the worlds of politics, business, policy, and marketing, where our prodigious intuition for human behavior can always generate explanations for why people do what they do or how to make them do something different. Because these explanations are so plausible, our natural tendency is to want to act on them without further ado. But if we have learned one thing from science, it is that the most plausible explanation is not necessarily correct. Adopting a scientific approach to decision making requires us to test our hypotheses with data.

While data is essential for scientific decision making, theory, intuition, and imagination remain important as well—to generate hypotheses in the first place, to devise creative tests of the hypotheses that we have, and to interpret the data that we collect. Data and theory, in other words, are the yin and yang of the scientific method—theory frames the right questions, while data answers the questions that have been asked. Emphasizing either at the expense of the other can lead to serious mistakes.

Also important is experimentation, which doesn’t mean “trying new things” or “being creative” but quite specifically the use of controlled experiments to tease out causal effects. In business, most of what we observe is correlation—we do X and Y happens—but often what we want to know is whether or not X caused Y. How many additional units of your new product did your advertising campaign cause consumers to buy? Will expanded health insurance coverage cause medical costs to increase or decline? Simply observing the outcome of a particular choice does not answer causal questions like these: we need to observe the difference between choices.
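
To make that distinction concrete, here is a minimal Python sketch of a randomized experiment; every number in it is invented for illustration. Because a coin flip decides who sees the ad, the two groups differ only in exposure, so the gap in purchase rates estimates the causal effect rather than a mere correlation.

```python
# A minimal sketch of a controlled (randomized) experiment. All rates
# and effect sizes here are invented for illustration.
import random

random.seed(1)
base_rate = 0.10   # purchase rate without the ad (assumed)
true_lift = 0.02   # true causal effect of the ad (assumed)

outcomes = {True: [], False: []}
for _ in range(200_000):
    saw_ad = random.random() < 0.5        # coin-flip assignment
    rate = base_rate + (true_lift if saw_ad else 0.0)
    outcomes[saw_ad].append(random.random() < rate)

estimate = (sum(outcomes[True]) / len(outcomes[True])
            - sum(outcomes[False]) / len(outcomes[False]))
print(f"estimated lift caused by the ad: {estimate:.3f}")  # ~0.02
```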

Replicating the conditions of a controlled experiment is often difficult or impossible in business or policy settings, but increasingly it is being done in “field experiments,” where treatments are randomly assigned to different individuals or communities. For example, MIT’s Poverty Action Lab has conducted over 400 field experiments to better understand aid delivery, while economists have used such experiments to measure the impact of online advertising.

Although field experiments are not an invention of the Internet era—randomized trials have been the gold standard of medical research for decades—digital technology has made them far easier to implement. Thus, as companies like Facebook, Google, Microsoft, and Amazon increasingly reap performance benefits from data science and experimentation, scientific decision making will become more pervasive.

Nevertheless, there are limits to how scientific decision makers can be. Unlike scientists, who have the luxury of withholding judgment until sufficient evidence has accumulated, policy makers or business leaders generally have to act in a state of partial ignorance. Strategic calls have to be made, policies implemented, reward or blame assigned. No matter how rigorously one tries to base one’s decisions on evidence, some guesswork will be required.

Exacerbating this problem is that many of the most consequential decisions offer only one opportunity to succeed. One cannot go to war with half of Iraq and not the other just to see which policy works out better. Likewise, one cannot reorganize the company in several different ways and then choose the best. The result is that we may never know which good plans failed and which bad plans worked.

Read the entire article here.

Image: Screenshot of Iris, Ayasdi’s data-visualization tool. Courtesy of Ayasdi / Wired.

Mr. Magorium’s Real Life Toy Emporium

We are all children at heart. Unfortunately many of us are taught to suppress or abandon our dreams and creativity as a prerequisite for entering adulthood. However, a few manage to keep the wonder of their inner child alive.

Tim Rowett is one such person; through his toys he brings smiles and re-awakens memories in many of us who have since forgotten how to play and imagine. Though I would take issue with Wired’s characterization of Mr. Rowett as an “eccentric”: eccentricity is not a label that I’d apply to a person who remains true to his or her earlier self.

From Wired (UK):

When Wired.co.uk visited Tim Rowett’s flat in Twickenham, nothing had quite prepared us for the cabinet of curiosities we found ourselves walking into. Old suitcases overflowing with toys and knick-knacks were meticulously labelled, dated and stacked on top of one another from room to room, floor to ceiling. Every bookshelf, corner and cupboard had been stripped of whatever its original purpose might have been, and replaced with the task of storing Tim’s 25,000 toys, which he’s been collecting for over 50 years.

For the last five years Tim has been entertaining a vast and varied audience of millions on YouTube, becoming a perhaps surprising viral success. Taking a small selection of his toys each week to and from a studio in Buckinghamshire — which also happens to be an 18th century barn — he’s steadily built up a following of the curious, the charmed and the fanatic.

If you’re a regular user of Reddit, or perhaps occasionally find yourself in “the weird place” on YouTube after one too many clicks through the website’s dubious “related videos” section, then you’ve probably already come across Tim in one form or another. With more than 28 million views and hundreds of thousands of subscribers, he’s certainly no small presence.

You won’t know him as Tim, though. In fact, unless you’ve deliberately gone out of your way, you won’t know very much about Tim at all — he’s a private man, who’s far more interested in entertaining and educating viewers with his endless collection of toys and gadgets, which often have mathematically or scientifically curious fundamental principles, than he is in bothering you with fussy details like his full name.

Greeted with a warm and familiar hello, Tim offered us a cup of tea, a biscuit and a seat by the fire. “Toys, everywhere, toys,” he said, looking round the room as he sat down. “I see myself as an hourglass. A large part of me is 112, a small part is my physical age and the last part is a 12-year-old boy.”

This unique mix of old and new — both literally and figuratively — certainly displays itself in his videos, of which there are upwards of 500, rarely more than 10 minutes in length. The formula is refreshingly simple. Tim sits at a table, demonstrates how a particular toy works, and provides background information on the piece before explaining how the mechanism inside (if it has one) functions — a particular delight for the scientifically-minded collector: “The mechanism is the key thing,” he explained, “and some of them are quite remarkable. If a child breaks a toy I often think ‘oh wonderful’ because it means I can get into it.”

The apparently simple facade of the show is slightly deceptive however — Tim works with two ex-BBC producers: Hendrik Ball and George Auckland, who are responsible for editing and filming the videos. Hendrik’s passion for science (fuelled by his BSc at Bristol) ultimately landed him a job as a producer at the BBC, which he kept for 25 years, specialising in science and educational material. Hendrik has his own remarkable history in tech, having written the first website for the BBC that ever accompanied a television programme (called Multimedia Business), back in 1996, making him and George “a little nucleus of knowledge of multimedia in our department at that time”.

With few opportunities presenting themselves at the BBC to expand their newly developed skills in HTML, the two hatched a plan to create a website called Grand Illusions, which would not only sell many of the various toys and gadgets Tim came across in his collection, but would also experiment with video, with Tim as the presenter: “George and I wanted to get some more first-hand experience of running a website which would feed into our BBC work,” said Hendrik, “so we had this idea, which closely involved a bottle of Rioja — wilder rumours say there were two bottles — and we came up with Grand Illusions. Within about a week we’d finished the website and at one point we were getting more hits than the BBC education website.”

Having spent only two hours with Tim, it’s clear why Hendrik and George were so keen to get him in front of the camera. During our time together, Tim played up to his role as the restless prestidigitator, which has afforded him such great success online. “I’m a child philosopher,” he said, as he waved a parallax-inspired business card in front of us. “You can either explore the world outside, as people do,” he placed a tiny cylindrical metal tube in my hand, “or you can explore the world inside, which is equally meaningful in my mind — there are still dragons and dangers and treasures on the inside as well as the outside world.” He then suggested throwing the cylinder in the air, and it burst into a large magic wand.

This constant conjuring was what initially piqued Hendrik’s interest: “He’s a master at it. Whenever he goes anywhere he’ll have a few toys on him. If there’s ever a lull he’ll produce one and give a quick demonstration and then everyone wants a go but, just as the excitement is peaking, Tim will bring out the next one.”

On one occasion, after a meal, Tim inflated a large balloon outside of a restaurant using a helium cylinder he stores in the boot of his car. He attached a sparkler to the balloon, lit it and then let the balloon float off into the sky. “It was an impressive end to the evening,” says Hendrik.

When we asked Hendrik what he thought the appeal of Tim’s channel was, on which nearly two million people have watched a video on Japanese zip bags and a further million on a spinning gun, he stressed that sometimes his apparent innocence worked in their favour. “Tim produced a toy some while ago, which looked like a revolver but in black rubber. It has a wire coming out of it and there’s a battery at the other end — when you press a button the end of the revolver sort of wiggles,” says Hendrik, who assures us that Tim bought this from a toy shop and has the original packaging to prove it. He also bought a rather large rubbery heart, which kind of throbs when you push a button.

Read the entire story here.

Image: Tim Rowett / Grand Illusions. Courtesy of Wired UK.

The Persistent Self

Many of us strive for persistence beyond the realm of our natural life-spans. Some seek to be remembered through monuments, buildings and other physical objects. Others seek permanence through literary and artistic works. Still others aim for remembrance through less lasting, but noble deeds: social programs, health initiatives, charitable foundations and so on. And yet others wish to be preserved in frozen stasis for later thawing and re-awakening. It is safe to say that many of us would seek to live forever.

So, it comes as no surprise to see internet startups exploring the market to preserve us or facsimiles of us — digitally — after death. Introducing Eterni.me — your avatar to a virtual eternity.

From Wired (UK):

“We don’t try to replace humans or give false hopes to people grieving.” Romanian design consultant Marius Ursache, cofounder of Eterni.me, needs to clear this up quickly. Because when you’re building a fledgling artificial intelligence company that promises to bring back the dead — or at least, their memories and character, as preserved in their digital footprint — for virtual chats with loved ones, expect a lot of flack.

“It is going to really suck — think Cleverbot with weird out-of-place references to things from that person’s life, masquerading as that person,” wrote one Redditor on the thread “Become Virtually Immortal (In the creepiest way possible)”, which immediately appeared after Eterni.me’s launch was announced last week. Retorts ranged from the bemused — “Now that is some scary f’d up s**t right there. WTF!?” — to the amusing: “Imagine a world where drunk you has to reason with sober AI you before you’re allowed to drunk dial every single person you ever dated or saw naked. So many awkward moments avoided.” But the resounding consensus seems to be that everyone wants to know more.

The site launched with the look of any other Silicon Valley internet startup, but a definitively new take on an old message. While social media companies want you to share and create the story of you while you’re alive, and lifelogging company Memoto promises to capture “meaningful [and shareable] moments”, Eterni.me wants to wrap that all up for those you leave behind into a cohesive AI they can chat with.

Three thousand people registered to the service within the first four days of the site going live, despite there being zero product to make use of (a beta version is slated for 2015). So with a year to ponder your own mortality, why the excitement for a technology that is, at this moment, merely a proof of concept?

“We got very mixed reactions, from ecstatic congratulations to hate mail. And it’s normal — it’s a very polarising topic. But one thing was constant: almost everybody we’ve interacted with truly believes this will be a reality someday. The only question is when it will be a reality and who will make it a reality,” Ursache tells us.

Popular culture, and the somewhat innate human need to believe we are impervious, have well prepared us for the concept. Ray Kurzweil wants us to upload our brains to computers and develop synthetic neocortexes, and AI has featured prominently on film and TV for decades, including in this month’s Valentine’s Day release of a human-virtual assistant love story. In series two of the British future-focused drama Black Mirror, Hayley Atwell reconnects with her deceased lover using a system comparable to what Eterni.me is trying to achieve — though Ursache calls it a “creepier” version, and tells us “we’re trying to stay away from that idea”, the concept that it’s a way for grieving loved ones to stall moving on.

Sigmund Freud called our relationship with the concept of immortality the “real secret of heroism” — that we carry out heroic feats is only down to a perpetual and inherent belief that our consciousness is permanent. He writes in Reflections on War and Death: “We cannot, indeed, imagine our own death; whenever we try to do so we find that we survive ourselves as spectators. The school of psychoanalysis could thus assert that at bottom no one believes in his own death, which amounts to saying: in the unconscious every one of us is convinced of his immortality… Our unconscious therefore does not believe in its own death; it acts as though it were immortal.”

This is why Eterni.me is not just about loved ones signing up after the event, but individuals signing up to have their own character preserved, under their watchful eye while still alive.

The company’s motto is “it’s like a Skype chat from the past,” but it’s still very much about crafting how the world sees you — or remembers you, in this case — just as you might pause and ponder on hitting Facebook’s post button, wondering till the last if your spaghetti dinner photo/comment really gets the right message across. On its more troubling side, the site plays on the fear that you can no longer control your identity after you’re gone; that you are in fact a mere mortal. “The moments and emotions in our lifetime define how we are seen by our family and friends. All these slowly fade away after we die — until one day… we are all forgotten,” it says in its opening lines — scroll down and it provides the answer to all your problems: “Simply Become Immortal”.

Part of the reason we might identify as being immortal — at least unconsciously, as Freud describes it — is because we craft a life we believe will be memorable, or have children we believe our legacy will live on in. Eterni.me’s comment shatters that illusion and could be seen as opportunistic on the founders’ part. The site also goes on to promise a “virtual YOU” that can “offer information and advice to your family and friends after you pass away”, a comfort to anyone worried about leaving behind a spouse or children.

In contrast to this rather dramatic claim, Ursache says: “We’re trying to make it clear that it’s not replacing a person, but trying to preserve as much of the information one generates, and offering asynchronous access to it.”

Read the entire article here.

Image: Eterni.me screenshot. Courtesy of Eterni.

Fast Fashion and Smartphones

Teen retail isn’t what it used to be. Once dominated by the likes of Aeropostale, Abercrombie & Fitch, and American Eagle, the sector is in a downward spiral. Many retail analysts place the blame on the internet. While discretionary income is down and unemployment is up among teens, there are two other key factors driving the change: first, smartphones loaded with apps seem to be more important to a teen’s self-identity than an emblazoned tee-shirt; second, fast-fashion houses, such as H&M, can churn out fresh designs at a fraction of the cost, thanks to fully integrated, on-demand supply chains. Perhaps the silver lining in all of this, if you could call it such, is that malls may soon become the hang-out for old-timers.

From the NYT:

Luring young shoppers into traditional teenage clothing stores has become a tough sell.

When 19-year-old Tsarina Merrin thinks of a typical shopper at some of the national chains, she doesn’t think of herself, her friends or even contemporaries.

“When I think of who is shopping at Abercrombie,” she said, “I think it’s more of people’s parents shopping for them.”

Sales are down across the shelves of many traditional teenage apparel retailers, and some analysts and others suggest that it’s not just a tired fashion sense causing the slump. The competition for teenage dollars, at a time of high unemployment within that age group, spans from more stores to shop in to more tempting technology.

And sometimes phones loaded with apps or a game box trump the latest in jeans.

Mainstays in the industry like Abercrombie & Fitch, American Eagle Outfitters and Aéropostale, which dominated teenage closets for years, have been among those hit hard.

The grim reports of the last holiday season have already proved punishing for senior executives at the helm of a few retailers. In a move that caught many analysts by surprise, the chief executive of American Eagle, Robert L. Hanson, announced he was leaving the company last week. And on Tuesday, Abercrombie announced they were making several changes to the company’s board and leadership, including separating the role of chief executive and chairman.

Aside from those shake-ups, analysts are saying they do not expect much improvement in this retail sector any time soon.

According to a survey of analysts conducted by Thomson Reuters, sales at teenage apparel retailers open for more than a year, like Wet Seal, Zumiez, Abercrombie and American Eagle, are expected to be 6.4 percent lower in the fourth quarter over the previous period. That is worse than any other retail category.

“It’s enough to make you think the teen is going to be walking around naked,” said John D. Morris, an analyst at BMO Capital Markets. “What happened to them?”

Paul Lejuez, an analyst at Wells Fargo, said he and his team put out a note in May on the health of the teenage sector and department stores called “Watch Out for the Kid With the Cough.” (Aéropostale was the coughing teenager.) Nonetheless, he said, “We ended up being surprised just how bad things got so quickly. There’s really no sign of life anywhere among the traditional players.”

Causes are ticked off easily. Mentioned often is the high teenage unemployment rate, reaching 20.2 percent among 16- to 19-year-olds, far above the national rate of 6.7 percent.

Cheap fashion has also driven a more competitive market. So-called fast-fashion companies, like Forever 21 and H&M, which sell trendy clothes at low prices, have muscled into the space, while some department stores and discount retailers like T. J. Maxx now cater to teenagers, as well.

“You can buy a plaid shirt at Abercrombie that’s like $70,” said Daniela Donayre, 17, standing in a Topshop in Manhattan. “Or I can go to Forever 21 and buy the same shirt for $20.”

Online shopping, which has been roiling the industry for years, may play an especially pronounced role in the teenage sector, analysts say. A study of a group of teenagers released in the fall by Piper Jaffray found that more than three-fourths of young men and women said they shopped online.

Not only did teenagers grow up on the Internet, but it has shaped and accelerated fashion cycles. Things take off quickly and fade even faster, watched by teenagers who are especially sensitive to the slightest shift in the winds of a trend.

Matthew McClintock, an analyst at Barclays, pointed to Justin Bieber as an example.

“Today, if you saw that Justin Bieber got arrested drag-racing,” Mr. McClintock said, “and you saw in the picture that he had on a cool red shirt, then you can go online and find that cool red shirt and have it delivered to you in two days from some boutique in Los Angeles.

“Ten years ago, teens were dependent on going to Abercrombie & Fitch and buying from the select items that Mike Jeffries, the C.E.O., thought would be popular nine months ago.”

Read the entire story here.

Image courtesy of Google Search.

Your Friends Are Friendlier… And…

Your friends have more friends than you. But wait, there’s more not-so-good news. Not only are your friends friendlier and more befriended than you, they are also likely to be wealthier and happier. How can this be, you may ask? It’s all down to averaging and the mathematics of networks and their interconnections. This so-called Friendship Paradox manifests itself in the dynamics of all social networks — it applies online as well as in the real world.

From Technology Review:

Back in 1991, the sociologist Scott Feld made a surprising discovery while studying the properties of social networks. Feld calculated the average number of friends that a person in the network has and compared this to the average number of friends that these friends had.

Against all expectations it turned out that the second number is always bigger than the first. Or in other words, your friends have more friends than you do.

Researchers have since observed the so-called friendship paradox in a wide variety of situations. On Facebook, your friends will have more friends than you have. On Twitter, your followers will have more followers than you do. And in real life, your sexual partners will have had more partners than you’ve had. At least, on average.

Network scientists have long known that this paradoxical effect is the result of the topology of networks—how they are connected together. That’s why similar networks share the same paradoxical properties.

But are your friends also happier than you are, or richer, or just better? That’s not so clear because happiness and wealth are not directly represented in the topology of a friendship network. So an interesting question is how far the paradox will go.

Today, we get an answer thanks to the work of Young-Ho Eom at the University of Toulouse in France and Hang-Hyun Jo at Aalto University in Finland. These guys have evaluated the properties of different characteristics on networks and worked out the mathematical conditions that determine whether the paradox applies to them or not. Their short answer is yes: your friends probably are richer than you are.

The paradox arises because numbers of friends people have are distributed in a way that follows a power law rather than an ordinary linear relationship. So most people have a few friends while a small number of people have lots of friends.

It’s this second small group that causes the paradox. People with lots of friends are more likely to number among your friends in the first place. And when they do, they significantly raise the average number of friends that your friends have. That’s the reason that, on average, your friends have more friends than you do.
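
The arithmetic is easy to check for yourself. Below is a minimal Python sketch (my own illustration, not code from the researchers) that grows a power-law-style network with the standard Barabasi-Albert model from the networkx library, then compares the average number of friends with the average number of friends that friends have. Feld’s underlying identity is that the mean degree of a randomly chosen friend equals E[d²]/E[d], which exceeds the plain mean E[d] whenever the number of friends varies at all.

```python
# A minimal sketch of the friendship paradox. Requires networkx; the
# graph model (Barabasi-Albert preferential attachment) is my choice
# of a standard power-law network, not the researchers' data.
import networkx as nx

# 10,000 people; each newcomer befriends 3 existing people, with the
# already-popular more likely to be chosen.
G = nx.barabasi_albert_graph(n=10_000, m=3, seed=42)

# Average number of friends per person.
avg_degree = sum(d for _, d in G.degree()) / G.number_of_nodes()

# Average, over every (person, friend) pair, of the friend's count.
friend_degs = [G.degree(f) for person in G for f in G.neighbors(person)]
avg_friend_degree = sum(friend_degs) / len(friend_degs)

print(f"average friends per person:  {avg_degree:.1f}")
print(f"average friends of a friend: {avg_friend_degree:.1f}")
# The second figure comes out reliably larger: the paradox in action.
```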

But what of other characteristics, such as wealth and happiness, which are not represented by the network topology?

To study other types of network, Eom and Jo looked at two academic networks in which scientists are linked if they have co-authored a scientific paper together. Each scientist is a node in the network and the links arise between scientists who have been co-authors.

Sure enough, the paradox raises its head in this network too. If you are a scientist, your co-authors will have more co-authors than you, as reflected in the network topology. But curiously, they will also have more publications and more citations than you too.

Eom and Jo call this the “generalized friendship paradox” and go on to derive the mathematical conditions in which it occurs. They say that when a paradox arises as a result of the way nodes are connected together, any other properties of these nodes demonstrate the same paradoxical nature, as long as they are correlated in a certain way.

As it turns out, numbers of publications and citations meet this criterion. And so too do wealth and happiness. So the answer is yes: your friends probably are richer and happier than you are.

That has significant implications for the way people perceive themselves given that their friends will always seem happier, wealthier and more popular than they are. And the problem is likely to be worse in networks where this is easier to see. “This might be the reason why active online social networking service users are not happy,” say Eom and Jo, referring to other research that has found higher levels of unhappiness among social network users.

So if you’re an active Facebook user feeling inadequate and unhappy because your friends seem to be doing better than you are, remember that almost everybody else on the network is in a similar position.

Read the entire article here.

Image: Cast of the NBC TV show Friends. Courtesy of Vanity Fair, NBC and respective rights holders.

The Spacetime Discontinuum

Einstein transformed our notions of the universe, teaching us, amongst other things, that time is relative to the velocity of the observer. While he had in mind no less than the entire cosmos when constructing his elegant theories, he failed to consider relativity in the home and workplace, and specifically how women and men experience time differently.

From the WSJ:

Several years ago, while observing a parenting group in Minnesota, I was struck by a confession one of the women made to her peers: She didn’t really care that her husband did the dishes after dinner. Sure, it was swell of him, and she had friends whose husbands did less. But what she really wanted, at that point in her day, was for her husband to volunteer to put the kids to bed. She would have been glad to sit in the kitchen on her own for a few minutes with the water running and her mind wandering. Another woman chimed in: “Totally. The dishes don’t talk back to you.”

According to the American Time Use Survey—which asks thousands of Americans annually to chronicle how they spend their days—men and women now work roughly the same number of hours a week (though men work more paid hours, and women more unpaid). Given this balanced ledger, one might guess that all would finally be quiet on the domestic front—that women would finally have stopped wondering how they, rather than their husbands, got suckered into such a heavy load. But they haven’t. The question is: Why?

Part of the problem is that averages treat all data as if they’re the same and therefore combinable, which often results in a kind of absurdity. On average, human beings have half an Adam’s apple, but no one thinks to lump men and women together this way. Similarly, we should not assume that men and women’s working hours are the same in kind. The fact is, men and women experience their time very differently.

For starters, not all work is created equal. An hour spent on one kind of task is not necessarily the equivalent of an hour spent on another. Take child care, a task to which mothers devote far more hours than dads. It creates much more stress in women than other forms of housework. In “Alone Together” (2007), a comprehensive look at the state of American marriage, the authors found that if women believe child care is unevenly divided in their homes, this imbalance is much more likely to affect their marital happiness than a perceived imbalance in, say, vacuuming.

Or consider night duty. Sustained sleep deprivation, as we know, consigns people to their own special league of misery. But it’s generally mothers, rather than fathers, who are halfway down the loonytown freeway to hysterical exhaustion, at least in the early years of parenting. According to the American Time Use Survey, women in dual-earner couples are three times more likely to report interrupted sleep if they have a child under the age of 1, and stay-at-home mothers are six times as likely to get up with their children as are stay-at-home fathers.

Funny: I once sat on a panel with Adam Mansbach, the author of the best-selling parody “Go the F— to Sleep.” At one point in the discussion, he conceded that his partner put his child to bed most nights. He may have written a book about the tyranny of toddlers at bedtime, but in his house, it was mainly Mom’s problem.

Complicating matters, mothers assume a disproportionate number of time-sensitive domestic tasks, whether it’s getting their toddlers dressed for school or their 12-year-olds off to swim practice. Their daily routine is speckled with what sociologists Annette Lareau and Elliot Weininger call “pressure points,” or nonnegotiable demands that make their lives, as the authors put it, “more frenetic.”

These deadlines have unintended consequences. They force women to search for wormholes in the time-space continuum simply to accomplish all the things that they need to do. In 2011, the sociologists Shira Offer and Barbara Schneider found that mothers spend, on average, 10 extra hours a week multitasking than do fathers “and that these additional hours are mainly related to time spent on housework and child care.”

When fathers spend time at home, on the other hand, it reduces their odds of multitasking by over 30%. Which may explain why, a few years ago, researchers from UCLA found that a father in a room by himself was the “person-space configuration observed most frequently” in their close study of 32 families at home. It may also explain why many fathers manage to finish the Sunday paper while their wives do not—they’re not constantly getting up to refill bowls of Cheerios.

Being compelled to divide and subdivide your time doesn’t just compromise your productivity and lead to garden-variety discombobulation. It also creates a feeling of urgency—a sense that no matter how tranquil the moment, no matter how unpressured the circumstances, there’s always a pot somewhere that’s about to boil over.

Read the entire essay here.

Online Social Networks as Infectious Diseases

A new research study applies the concepts of infectious diseases to online social networks. By applying epidemiological modelling to examine the dynamics of networks, such as MySpace and Facebook, researchers are able to analyze the explosive growth — the term “viral” is not coincidental — and ultimate demise of such networks. So, is Facebook destined to suffer a fate similar to Myspace, Bebo, polio and the bubonic plague? These researchers from Princeton think so, estimating Facebook will lose 80 percent of its 1.2 billion users by 2017.

From the Guardian:

Facebook has spread like an infectious disease but we are slowly becoming immune to its attractions, and the platform will be largely abandoned by 2017, say researchers at Princeton University (pdf).

The forecast of Facebook’s impending doom was made by comparing the growth curve of epidemics to those of online social networks. Scientists argue that, like bubonic plague, Facebook will eventually die out.

The social network, which celebrates its 10th birthday on 4 February, has survived longer than rivals such as Myspace and Bebo, but the Princeton forecast says it will lose 80% of its peak user base within the next three years.

John Cannarella and Joshua Spechler, from the US university’s mechanical and aerospace engineering department, have based their prediction on the number of times Facebook is typed into Google as a search term. The charts produced by the Google Trends service show Facebook searches peaked in December 2012 and have since begun to trail off.

“Ideas, like diseases, have been shown to spread infectiously between people before eventually dying out, and have been successfully described with epidemiological models,” the authors claim in a paper entitled Epidemiological modelling of online social network dynamics.

“Ideas are spread through communicative contact between different people who share ideas with each other. Idea manifesters ultimately lose interest with the idea and no longer manifest the idea, which can be thought of as the gain of ‘immunity’ to the idea.”

Facebook reported nearly 1.2 billion monthly active users in October, and is due to update investors on its traffic numbers at the end of the month. While desktop traffic to its websites has indeed been falling, this is at least in part due to the fact that many people now only access the network via their mobile phones.

For their study, Cannarella and Spechler used what is known as the SIR (susceptible, infected, recovered) model of disease, which creates equations to map the spread and recovery of epidemics.
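
For the curious, the textbook SIR equations are dS/dt = -βSI/N, dI/dt = βSI/N - γI, dR/dt = γI: susceptible users are “infected” through contact with infected users, then “recover” and stay immune. Below is a minimal Python sketch of that plain model using simple Euler integration; the parameter values are invented for illustration, and the paper itself fits a modified variant, in which losing interest also spreads through contact, rather than this textbook form.

```python
# A minimal sketch of the plain SIR model, integrated with an Euler
# loop. Parameters are illustrative only; the Princeton paper fits a
# modified SIR, not this textbook version.
N = 1000.0     # total population
beta = 0.5     # infection (adoption) rate
gamma = 0.05   # recovery (abandonment) rate
dt = 0.01      # time step

S, I, R = N - 1.0, 1.0, 0.0   # susceptible, infected, recovered
for _ in range(int(300 / dt)):
    new_infections = beta * S * I / N * dt
    new_recoveries = gamma * I * dt
    S -= new_infections
    I += new_infections - new_recoveries
    R += new_recoveries

print(f"S = {S:.0f}, I = {I:.0f}, R = {R:.0f}")
# I(t) traces the rise-peak-collapse curve that the authors match
# against Google search traffic for "Myspace" and "Facebook".
```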

They tested various equations against the lifespan of Myspace, before applying them to Facebook. Myspace was founded in 2003 and reached its peak in 2007 with 300 million registered users, before falling out of use by 2011. Purchased by Rupert Murdoch’s News Corp for $580m, Myspace signed a $900m deal with Google in 2006 to sell its advertising space and was at one point valued at $12bn. It was eventually sold by News Corp for just $35m.

The 870 million people using Facebook via their smartphones each month could explain the drop in Google searches – those looking to log on are no longer doing so by typing the word Facebook into Google.

But Facebook’s chief financial officer David Ebersman admitted on an earnings call with analysts that during the previous three months: “We did see a decrease in daily users, specifically among younger teens.”

Investors do not appear to be heading for the exit just yet. Facebook’s share price reached record highs this month, valuing founder Mark Zuckerberg’s company at $142bn.

Read the entire article here.

Image: Scanning electron microscope image of Yersinia pestis, the bacterium responsible for bubonic plague. Courtesy of Wikipedia.


Lies By Any Other Name

Certain gestures and facial movements are usually good indicators of a lie in progress. If your boss averts her eyes when she tells you “what a good employee you are”, or if your spouse looks at his fingernails when telling you “how gorgeous your new dress looks”, you can be almost certain that you are being told some half-truths or mistruths. Psychologists have studied these visual indicators for as long as humans have told lies.

Since dishonesty is so widespread and well-studied it comes as no surprise that there are verbal cues as well — just as telling as sweaty palms. A well-used verbal clue to insincerity, ironically, is the phrase “to be honest”. Verbal tee-ups such as this are known by behavioral scientists as qualifiers or performatives. There is a growing list.

From the WSJ:

A friend of mine recently started a conversation with these words: “Don’t take this the wrong way…”

I wish I could tell you what she said next. But I wasn’t listening—my brain had stalled. I was bracing for the sentence that would follow that phrase, which experience has taught me probably wouldn’t be good.

Certain phrases just seem to creep into our daily speech. We hear them a few times and suddenly we find ourselves using them. We like the way they sound, and we may find they are useful. They may make it easier to say something difficult or buy us a few extra seconds to collect our next thought.

Yet for the listener, these phrases are confusing. They make it nearly impossible to understand, or even accurately hear, what the speaker is trying to say.

Consider: “I want you to know…” or “I’m just saying…” or “I hate to be the one to tell you this…” Often, these phrases imply the opposite of what the words mean, as with the phrase, “I’m not saying…” as in “I’m not saying we have to stop seeing each other, but…”

Take this sentence: “I want to say that your new haircut looks fabulous.” In one sense, it’s true: The speaker does wish to tell you that your hair looks great. But does he or she really think it is so or just want to say it? It’s unclear.

Language experts have textbook names for these phrases—”performatives,” or “qualifiers.” Essentially, taken alone, they express a simple thought, such as “I am writing to say…” At first, they seem harmless, formal, maybe even polite. But coming before another statement, they often signal that bad news, or even some dishonesty on the part of the speaker, will follow.

“Politeness is another word for deception,” says James W. Pennebaker, chair of the psychology department of the University of Texas at Austin, who studies these phrases. “The point is to formalize social relations so you don’t have to reveal your true self.”

In other words, “if you’re going to lie, it’s a good way to do it—because you’re not really lying. So it softens the blow,” Dr. Pennebaker says.

Of course, it’s generally best not to lie, Dr. Pennebaker notes. But because these sayings so frequently signal untruth, they can be confusing even when used in a neutral context. No wonder they often lead to a breakdown in personal communications.

Some people refer to these phrases as “tee-ups.” That is fitting. What do you do with a golf ball? You put it on a peg at the tee—tee it up—and then give it a giant wallop.

Betsy Schow says she felt like she was “hit in the stomach by a cannonball” the day she was preparing to teach one of her first yoga classes. A good friend—one she’d assumed had shown up to support her—approached her while she was warming up. She was in the downward facing dog pose when she heard her friend say, “I am only telling you this because I love you…”

The friend pointed out that lumps were showing beneath Ms. Schow’s yoga clothes and said people laughed at her behind her back because they thought she wasn’t fit enough to teach yoga. Ms. Schow had recently lost a lot of weight and written a book about it. She says the woman also mentioned that Ms. Schow’s friends felt she was “acting better than they were.” Then the woman offered up the name of a doctor who specializes in liposuction. “Hearing that made me feel sick,” says Ms. Schow, a 32-year-old fitness consultant in Alpine, Utah. “Later, I realized that her ‘help’ was no help at all.”

Tee-ups have probably been around as long as language, experts say. They seem to be used with equal frequency by men and women, although there aren’t major studies of the issue. Their use may be increasing as a result of social media, where people use phrases such as “I am thinking that…” or “As far as I know…” both to avoid committing to a definitive position and to manage the impression they make in print.

“Awareness about image management is increased any time people put things into print, such as in email or on social networks,” says Jessica Moore, department chair and assistant professor at the College of Communication at Butler University, Indianapolis. “Thus people often make caveats to their statements that function as a substitute for vocalized hedges.” And people do this hedging—whether in writing or in speech—largely unconsciously, Dr. Pennebaker says. “We are emotionally distancing ourselves from our statement, without even knowing it,” he says.

So, if tee-ups are damaging our relationships, yet we often don’t even know we’re using them, what can we do? Start by trying to be more aware of what you are saying. Tee-ups should serve as yellow lights. If you are about to utter one, slow down. Proceed with caution. Think about what you are about to say.

“If you are feeling a need to use them a lot, then perhaps you should consider the possibility that you are saying too many unpleasant things to or about other people,” says Ellen Jovin, co-founder of Syntaxis, a communication-skills training firm in New York. She considers some tee-up phrases to be worse than others. “Don’t take this the wrong way…” is “ungracious,” she says. “It is a doomed attempt to evade the consequences of a comment.”

Read the entire article here.

Image: Lies and the Lying Liars Who Tell Them, book cover, by Al Franken. Courtesy of Wikipedia.

$28,000 Per Night


A seven-night stay at an ultra-luxurious hotel suite for the super-rich will set you back a staggering $196,000. To put that into perspective, it is almost exactly the median U.S. house price at the end of 2013 ($196,500). The Jewel Suite by Martin Katz at the New York Palace hotel commands a princely sum of $28,000 per night.

From the NYT:

In most hotels, luxury is measured by the thread count of the linens (minimum 400, please) or the brand of the bathroom toiletries. But for those at the highest end of the market, where the only restraint on consumption is how conspicuous they want to be, a race to the top has broken out, with hotels outdoing one another to serve this tiny, if highly visible, niche.

Take the Jewel Suite by Martin Katz at the New York Palace, one of two recently opened specialty suites. The three-story, 5,000-square-foot space — a sort of penthouse Versailles — itself resembles a jewel box, albeit one with its own private elevator and views of the Empire State and Chrysler Buildings.

It’s hard to imagine Louis XIV being left wanting. The floor in the entryway is glittering black marble arranged in a sunburst pattern, while a 20-foot crystal chandelier hangs from the ceiling. The living room sofa is a brilliant sapphire blue and a tufted ivory chaise has a pearlescent sheen. Two floors up, in a second living room next to a vast private terrace, the wet bar (one of two in the suite) and half-bath are swathed in a sparkling wall covering, and an angular lavender sofa calls to mind an amethyst crystal. Iridescent tiles lining the private rooftop hot tub give the impression of sinking into a giant opal.

And then there are the jewels themselves: More than a million dollars of the jewelry designer’s work is displayed in five museum-like cases in the entryway, and a boudoir area in the master suite has lighting and floor-to-ceiling mirrors designed specifically for jewelry showings.

Such grandeur — or excess, depending on your point of view — is all there for the taking, starting at $25,000 a night.

“There is a very narrow market who want nothing less,” said Scott Berman, the United States hospitality and leisure practice leader at PricewaterhouseCoopers. “Price is not an issue. We’re talking about the jet set of the jet sets — high-net-worth individuals, generally foreign travelers in the U.S. who are accustomed to opulence.”

“It’s bragging rights,” said Pam Danziger, president of the luxury marketing firm Unity Marketing and author of “Putting the Luxe Back in Luxury,” published in 2011.

“I think this is just a matter of other brands trying to play catch-up to that. They don’t want to be the only hotel on the block that doesn’t have this super, super high-end offering.”

In New York, the race to capture the highest end of the market continues. In November, the Mandarin Oriental, New York, opened a 3,300-square-foot suite that includes floor-to-ceiling windows and a dining room that seats 10; its rate is $28,000 a night. The Loews Regency Hotel in New York reopened last week after a yearlong, $100 million renovation, and six one-of-a-kind suites will open in April. (Rates haven’t been set yet.)

“We want to present an image that’s commensurate with the new product,” said Jonathan Tisch, chairman of Loews Hotels. “By doing six different designs, we can create a sense of luxury in six different ways.”

“We’ve seen more and more boutique hotels and the bigger-name hotels making suites that are one-off,” said Kris Fuchs, principal at Suite New York, a furniture showroom involved in the Regency’s suite renovation. “I think it makes it extra special that you’re in a room no one else in the hotel has.”

This trend of super-suites had overseas antecedents, with demand driven by a growing cadre of the ultra-rich from around the world.

“Some of the major European capitals have had this going on in the past few years,” said David Loeb, a senior hotel analyst at Robert W. Baird & Company.

Ms. Danziger said the trend started in places like Singapore, London and major Middle Eastern cities. “You find that the new money types are the kinds given to this excessive display, valuing the display of this excessive, over-the-top consumption,” she said. “Subtlety is not appreciated.”

In the United States, this luxury race took longer to get going, in part because of the recession and a resistance to overt displays of wealth. But now, any such concerns have given way. It is perhaps most noticeable in New York City’s thriving hotel market, although spaces with similar square footage and amenities (if slightly less stratospheric rates) are surfacing in cities including Las Vegas, Miami and Dallas.

“Development is strong again,” said David Chase, general manager of the New York Palace. After struggling through the aftermath of the recession, luxury hotels are recovering and investing in capital improvements.

This week, the Ritz-Carlton in Dallas will open a 5,135-square-foot suite wing, including three adjoining suites and two rooms, for travelers who bring an entourage. “We found this need for this private area,” its general manager, Roberto van Geenen, said. Multiple interconnected spaces make it more convenient to house the phalanx of nannies, assistants, bodyguards, personal chefs and other attendants that the super-wealthy bring with them on trips.

“There are more and more hotels in that market, in Miami in particular, that are competing for very high-end leisure travelers,” Mr. Loeb said. “The growth of international travel is affecting many of the major markets in the U.S.”

“Without question this will increase the prestige of the hotel,” said John Laclé, general manager of the Hilton Bentley Miami/South Beach in Miami Beach, which opened a 3,000-square-foot penthouse in December.

Hotel industry professionals say these over-the-top suites serve a dual purpose. “A large part of what we do is creating an image,” Mr. Tisch said. Super-suites cater to the needs of billionaire travelers as well as the imaginations of middle-class tourists.

“This hotel already had a fantastic flow of high-net-worth people using our suites,” Mr. Chase said, listing Saudi diplomats and royalty, as well as Hollywood and sports stars, as regular guests.

Read the entire story here.

Image: The New York Palace – Dining room, Jewel Suite by Martin Katz. Courtesy of Martin Katz / The New York Palace.

Younger Narcissists Use Twitter…

…older narcissists use Facebook.


Online social media and social networks provide a wonderful petri dish in which to study humanity. For those who are online and connected — and that is a significant proportion of the world’s population — their every move, click, purchase, post and like can be collected, aggregated, dissected and analyzed (and sold). These trails through the digital landscape provide fertile ground for psychologists and social scientists of all types to examine our behaviors and motivations, in real-time. By their very nature online social networks offer researchers a vast goldmine of data from which to extract rich nuggets of behavioral and cultural trends — a digital trail is easy to find and impossible to erase. A perennial favorite for researchers is the area of narcissism (and we suspect it is a favorite of narcissists as well).

From the Atlantic:

It’s not hard to see why the Internet would be a good cave for a narcissist to burrow into. Generally speaking, they prefer shallow relationships (preferably one-way, with the arrow pointing toward themselves), and need outside sources to maintain their inflated but delicate egos. So, a shallow cave that you can get into, but not out of. The Internet offers both a vast potential audience, and the possibility for anonymity, and if not anonymity, then a carefully curated veneer of self that you can attach your name to.

In 1987, the psychologists Hazel Markus and Paula Nurius claimed that a person has two selves: the “now self” and the “possible self.” The Internet allows a person to become her “possible self,” or at least present a version of herself that is closer to it.

When it comes to studies of online narcissism, and there have been many, social media dominates the discussion. One 2010 study notes that the emergence of the possible self “is most pronounced in anonymous online worlds, where accountability is lacking and the ‘true’ self can come out of hiding.” But non-anonymous social networks like Facebook, which this study was analyzing, “provide an ideal environment for the expression of the ‘hoped-for possible self,’ a subgroup of the possible-self. This state emphasizes realistic socially desirable identities an individual would like to establish given the right circumstances.”

The study, which found that people higher in narcissism were more active on Facebook, points out that you tend to encounter “identity statements” on social networks more than you would in real life. When you’re introduced to someone in person, it’s unlikely that they’ll bust out with a pithy sound bite that attempts to sum up all that they are and all they hope to be, but people do that in their Twitter bio or Facebook “About Me” section all the time.

Science has linked narcissism with high levels of activity on Facebook, Twitter, and Myspace (back in the day). But it’s important to narrow in farther and distinguish what kinds of activity the narcissists are engaging in, since hours of scrolling through your news feed, though time-wasting, isn’t exactly self-centered. And people post online for different reasons. For example, Twitter has been shown to sometimes fulfill a need to connect with others. The trouble with determining what’s normal and what’s narcissism is that both sets of people generally engage in the same online behaviors, they just have different motives for doing so.

A recent study published in Computers in Human Behavior dug into the how and why of narcissists’ social media use, looking at both college students and an older adult population. The researchers measured how often people tweeted or updated their Facebook status, but also why, asking them how much they agreed with statements like “It is important that my followers admire me,” and “It is important that my profile makes others want to be my friend.”

Overall, Twitter use was more correlated with narcissism, but lead researcher Shaun W. Davenport, chair of management and entrepreneurship at High Point University, points out that there was a key difference between generations. Older narcissists were more likely to take to Facebook, whereas younger narcissists were more active on Twitter.

“Facebook has really been around the whole time Generation Y was growing up and they see it more as a tool for communication,” Davenport says. “They use it like other generations use the telephone… For older adults who didn’t grow up using Facebook, it takes more intentional motives [to use it], like narcissism.”

Whereas on Facebook, the friend relationship is reciprocal, you don’t have to follow someone on Twitter who follows you (though it is often polite to do so, if you are the sort of person who thinks of Twitter more as an elegant tea room than, I don’t know, someplace without rules or scruples, like the Wild West or a suburban Chuck E. Cheese). Rather than friend-requesting people to get them to pay attention to you, the primary method to attract Twitter followers is just… tweeting, which partially explains the correlation between number of tweets and narcissism.

Of course, there’s something to be said for quality over quantity—just look at @OneTweetTony and his 2,000+ followers. And you’d think that, even if you gather a lot of followers to you through sheer volume of content spewed, eventually some would tire of your face’s constant presence in their feed and leave you. W. Keith Campbell, head of the University of Georgia’s psychology department and author of The Narcissism Epidemic: Living in the Age of Entitlement, says that people don’t actually make the effort to unfriend or unfollow someone that often, though.

“What you find in real life with narcissists is that they’re very good at gaining friends and becoming leaders, but eventually people see through them and stop liking them,” he says. “Online, people are very good at gaining relationships, but they don’t fall off naturally. If you’re incredibly annoying, they just ignore you, and even then it might be worth it for entertainment value. There’s a reason why, on reality TV, you find high levels of narcissism. It’s entertaining.”

Also like reality TV stars, narcissists like their own images. They show a preference for posting photos on Facebook, but Campbell clarifies that it’s the type of photos that matter—narcissists tend to choose more attractive, attention-seeking photos. In another 2011 study, narcissistic adolescents rated their own profile pictures as “more physically attractive, more fashionable, more glamorous, and more cool than their less narcissistic peers did.”

Though social media is an obvious and much-discussed bastion of narcissism, online role-playing games, the most famous being World of Warcraft, have been shown to hold some attraction as well. A study of 1,471 Korean online gamers showed narcissists to be more likely to be addicted to the games than non-narcissists. The concrete goals and rewards the games offer allow the players to gather prestige: “As you play, your character advances by gaining experience points, ‘leveling-up’ from one level to the next while collecting valuables and weapons and becoming wealthier and stronger,” the study reads. “In this social setting, excellent players receive the recognition and attention of others, and gain power and status.”

And if that power comes through violence, so much the better. Narcissism has been linked to aggression, another reason for the games’ appeal. Offline, narcissists are often bullies, though attempts to link narcissism to cyberbullying have resulted in a resounding “maybe.”

 “Narcissists typically have very high self esteem but it’s very fragile self esteem, so when someone attacks them, that self-esteem takes a dramatic nosedive,” Davenport says. “They need more wins to combat those losses…so the wins they have in that [virtual] world can boost their self-esteem.”

People can tell when you are attempting to boost your self-esteem through your online presence. A 2008 study had participants rate Facebook pages (which had already been painstakingly coded by researchers) for 37 different personality traits. The pages’ owners had previously taken the Narcissistic Personality Inventory, and when narcissism was present, the raters picked up on it.

Campbell, one of the researchers on that study, tempers that finding now: “You can detect it, but it’s not perfect,” he says. “It’s sort of like shaving in your car window, you can do it, but it’s not perfect.”

Part of the reason why may be that, as we see more self-promoting behavior online, whether it’s coming from narcissists or not, it becomes more accepted, and thus, widespread.

Though, according to Davenport, the accusation that Generation Y, or—my least favorite term—Millennials, is the most narcissistic generation yet has been backed up by data, he wonders if it’s less a generational problem than just a general shift in our society.

“Some of it is that you see the behavior more on Facebook and Twitter, and some of it is that our society is becoming more accepting of narcissistic behavior,” Davenport says. “I do wonder if at some point the pendulum will swing back a little bit. Because you’re starting to see more published about ‘Is Gen Y more narcissistic?’, ‘What does this mean for the workplace?’, etc. All those questions are starting to become common conversation.”

When asked if our society is moving in a more narcissistic direction, Campbell replied: “President Obama took a selfie at Nelson Mandela’s funeral. Selfie was the word of the year in 2013. So yeah, this stuff becomes far more accepted.”

Read the entire article here.

Images courtesy of Google Search and respective “selfie” owners.

MondayMap: Best of the Worst


Today’s map is not for the faint of heart, but fascinating nonetheless. It tells us that if you are a resident of West Virginia you are more likely to die from a heart attack, whereas if you’re from Alabama you’ll die from a stroke; in Kentucky, well, cancer will get you first, but in Georgia you’re more likely to contract the flu.

Utah seems to have the highest predilection for porn, while Rhode Islanders love their illicit drugs, Coloradans prefer only cocaine and residents of New Mexico have a penchant for alcohol. On the educational front, Maine tops the list with the lowest SAT scores, but Texas has the lowest high school graduation rates.

The map is based on a wide collection of published statistics, plus a few less objective measures, as in the case of North Dakota (ugliest residents).

Find more details about the map here.

Map courtesy of Jeff Wysaski over at his blog Pleated Jeans.

The Diminishing Value of the Ever More Expensive College Degree

Paradoxically, the U.S. college degree is becoming less valuable while it continues an inexorable rise in cost. With academic standards now generally lower than ever and grade inflation pervasive, most recent college graduates are in a bind — limited employment prospects and a huge debt burden. Something must give soon, and it’s likely to be the colleges.

From WSJ:

The American political class has long held that higher education is vital to individual and national success. The Obama administration has dubbed college “the ticket to the middle class,” and political leaders from Education Secretary Arne Duncan to Federal Reserve Chairman Ben Bernanke have hailed higher education as the best way to improve economic opportunity. Parents and high-school guidance counselors tend to agree.

Yet despite such exhortations, total college enrollment has fallen by 1.5% since 2012. What’s causing the decline? While changing demographics—specifically, a birth dearth in the mid-1990s—accounts for some of the shift, robust foreign enrollment offsets that lack. The answer is simple: The benefits of a degree are declining while costs rise.

A key measure of the benefits of a degree is the college graduate’s earning potential—and on this score, their advantage over high-school graduates is deteriorating. Since 2006, the gap between what the median college graduate and the median high-school graduate earn has narrowed by $1,387 for men over 25 working full time, a 5% fall. Women in the same category have fared worse, losing 7% of their income advantage ($1,496).

A college degree’s declining value is even more pronounced for younger Americans. According to data collected by the College Board, for those in the 25-34 age range the differential between college graduate and high school graduate earnings fell 11% for men, to $18,303 from $20,623. The decline for women was an extraordinary 19.7%, to $14,868 from $18,525.
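
Those percentages can be verified directly from the dollar figures quoted: the decline is the old gap minus the new gap, divided by the old gap. A trivial check in Python, using the College Board numbers above:

# Percent decline in the college-vs-high-school earnings differential,
# ages 25-34, using the dollar figures quoted above.
def decline(old_gap, new_gap):
    return 100 * (old_gap - new_gap) / old_gap
print(f"men:   {decline(20623, 18303):.1f}%")   # ~11.2%, the article's 11%
print(f"women: {decline(18525, 14868):.1f}%")   # 19.7%, matching the article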

Meanwhile, the cost of college has increased 16.5% in 2012 dollars since 2006, according to the Bureau of Labor Statistics’ higher education tuition-fee index. Aggressive tuition discounting from universities has mitigated the hike, but not enough to offset the clear inflation-adjusted increase. Even worse, the lousy economy has caused household income levels to fall, limiting a family’s ability to finance a degree.

This phenomenon leads to underemployment. A study I conducted with my colleague Jonathan Robe, the 2013 Center for College Affordability and Productivity report, found explosive growth in the number of college graduates taking relatively unskilled jobs. We now have more college graduates working in retail than soldiers in the U.S. Army, and more janitors with bachelor’s degrees than chemists. In 1970, less than 1% of taxi drivers had college degrees. Four decades later, more than 15% do.

This is only partly the result of the Great Recession and botched public policies that have failed to produce employment growth. It’s also the result of an academic arms race in which universities have spent exorbitant sums on luxury dormitories, climbing walls, athletic subsidies and bureaucratic bloat. More significantly, it’s the result of sending more high-school graduates to college than professional fields can accommodate.

In 1970, when 11% of adult Americans had bachelor’s degrees or more, degree holders were viewed as the nation’s best and brightest. Today, with over 30% with degrees, a significant portion of college graduates are similar to the average American—not demonstrably smarter or more disciplined. Declining academic standards and grade inflation add to employers’ perceptions that college degrees say little about job readiness.

There are exceptions. Applications to top universities are booming, as employers recognize these graduates will become our society’s future innovators and leaders. The earnings differential between bachelor’s and master’s degree holders has grown in recent years, as those holding graduate degrees are perceived to be sharper and more responsible.

But unless colleges plan to offer master’s degrees in janitorial studies, they will have to change. They currently have little incentive to do so, as they are often strangled by tenure rules, spoiled by subsidies from government and rich alumni, and more interested in trivial things—second-rate research by third-rate scholars; ball-throwing contests—than imparting knowledge. Yet dire financial straits from falling demand for their product will force two types of changes within the next five years.

Image: college graduates. Courtesy of Business Insider.

Dear IRS: My Tax Return is Late Because…


Completing an annual tax return, and sending even more hard-earned cash to the government, is not much fun for anyone. So, it’s no surprise that many people procrastinate. In the UK, the organization entrusted with gathering pounds and pennies from the public is Her Majesty’s Revenue and Customs department — the equivalent of the Internal Revenue Service (IRS) in the US.

HMRC recently released a list of the worst excuses from taxpayers for not filing their returns on time. It includes such gems as “late due to death of a pet goldfish” and “late due to run in with a cow.” This re-confirms that the British are indeed the eighth wonder of the world.

From the Telegraph:

A builder who handed in his tax return late blamed the death of his pet goldfish, while a farmer said it was the fault of an unruly cow.

A third culprit said he failed to send in his forms after becoming obsessed with an erupting volcano on the television news.

They were among thousands of excuses used by individuals and businesses last year in a bid to avoid paying a penalty for a late tax return.

But, while HM Revenue & Customs says it considers genuine explanations, it has little regard for lame excuses.

As the top ten was disclosed, officials said all had been hit with £100 fines for late returns. They had all appealed, but lost their actions.

The list was released to encourage the self-employed, and other taxpayers, to meet this year’s January 31 deadline. In all, 10.9 million people are due to file tax returns this month. The number required to fill in a self-assessment form has been inflated by changes to Child Benefit. Any household with an individual earning more than £50,000 must now complete the form if they still receive the benefit.

Ruth Owen, the director general of personal tax, said: “There will always be unforeseen events that mean a taxpayer could not file their tax return on time.

“However, your pet goldfish passing away isn’t one of them.”

The ten worst excuses:

1. My pet goldfish died (self-employed builder)

2. I had a run-in with a cow (Midlands farmer)

3. After seeing a volcanic eruption on the news, I couldn’t concentrate on anything else (London woman)

4. My wife won’t give me my mail (self-employed trader)

5. My husband told me the deadline was March 31, and I believed him (Leicester hairdresser)

6. I’ve been far too busy touring the country with my one-man play (Coventry writer)

7. My bad back means I can’t go upstairs. That’s where my tax return is (a working taxi driver)

8. I’ve been cruising round the world in my yacht, and only picking up post when I’m on dry land (South East man)

9. Our business doesn’t really do anything (Kent financial services firm)

10. I’ve been too busy submitting my clients’ tax returns (London accountant)

Read the entire article here.

Image courtesy of Google Search.

The Military-Industrial-Ski-Resort-Complex

The demented machinations of the world’s greatest living despot, Kim Jong-un, continue. This time the great dictator is on the piste, inspecting a North Korean ski resort newly outfitted with two chair-lifts. And, no Dennis Rodman in sight.

From the Guardian:

It may not have the fir-lined pistes and abundant glühwein of the Swiss resorts of Linden or Wichtracht, close to where Kim Jong-un was educated, but the North Korean leader’s new ski resort at least has a ski lift.

In pictures released by the Korean Central News Agency on Tuesday, Kim can be seen riding the chair lift and admiring the empty pistes.

In August, Switzerland refused to supply machinery to North Korea in a £4.5m deal, describing it as a “propaganda” project, but North Korea has managed to acquire two ski lifts.

Kim took a test ride on one at the Masik Pass ski resort, which he said was “at the centre of the world’s attention”.

He noted “with great satisfaction” that everything was “impeccable” and ordered the authorities to serve the people well so that visitors may “keenly feel the loving care of the party”. He also commanded that the opening ceremony should be held at the earliest possible date.

The resort was described by the news agency as a “great monumental structure in the era of Songun,” referring to the nation’s “military first” policy.

Thousands of soldiers and workers, so called “shock brigades”, built the slopes, hotels and amenities. Earlier this year reporters witnessed workers pounding at the stone with hammers, young women marching with shovels over their shoulders and minivans equipped with loudspeakers blasting patriotic music into the mountain air.

Kim was educated in Berne, Switzerland, where mountains were the backdrop to his studies. Some have speculated that he must have skied during his time there as well as indulging in his often-reported love of basketball.

At the resort, Kim was accompanied by military leaders and Pak Myong-chol, a sports official known to have been associated with Kim’s late uncle who was executed this month.

Jang Song-thaek, Kim’s mentor, was put to death on charges including corruption and plotting to overthrow the state.

The execution was the biggest upheaval since Kim inherited power after the death of his father Kim Jong-il, in December 2011.

Kim visited the resort in June and commanded that work be finished by the end of the year. In the new photographs, he can be seen visiting a hotel room, a spa and a ski shop.

The 30-year-old likes to be associated with expensive, high-profile leisure projects as well as the more frequent party congresses and military inspections. Projects associated with him include a new water park, an amusement park and a horse riding club.

The Munsu water park in Pyongyang opened in October and Kim was photographed in a cinema in the newly-renovated Rungna people’s amusement park. State media also showed footage of Kim on a rollercoaster in the same park.

North Korea is one of the poorest countries in the world with an estimated per capita GDP of under £1,100. Government attempts to increase economic growth are often frustrated by the fear of opening the country to foreign influence.

Read the entire article here.

Image: North Korean leader Kim Jong-un inspects Masik Pass ski resort, Kangwon province. Courtesy of the AFP/Getty Images, Guardian.

Teens and the Internet: Don’t Panic

Some view online social networks, smartphones and texting as nothing but bad news for the future socialization of our teens. After all, they’re usually hunched heads down, thumbs out, immersed in their own private worlds, oblivious to all else, all the while, paradoxically and simultaneously, publishing and sharing anything and everything with anyone.

Yet others, such as Microsoft researcher Danah Boyd, have a more benign view of the technological maelstrom that surrounds our kids. In her book It’s Complicated: The Social Lives of Networked Teens, she argues that teenagers aren’t doing anything different online today than their parents and grandparents often did in person. Parents will take comfort from Boyd’s analysis that today’s teens will turn out much like their parents, behaving in similar ways and worrying about many of the same issues. Of course, teens will find this very, very uncool indeed.

From Technology Review:

Kids today! They’re online all the time, sharing every little aspect of their lives. What’s wrong with them? Actually, nothing, says Danah Boyd, a Microsoft researcher who studies social media. In a book coming out this winter, It’s Complicated: The Social Lives of Networked Teens, Boyd argues that teenagers aren’t doing much online that’s very different from what kids did at the sock hop, the roller rink, or the mall. They do so much socializing online mostly because they have little choice, Boyd says: parents now generally consider it unsafe to let kids roam their neighborhoods unsupervised. Boyd, 36, spoke with MIT Technology Review’s deputy editor, Brian Bergstein, at Microsoft Research’s offices in Manhattan.

I feel like you might have titled the book Everybody Should Stop Freaking Out.

It’s funny, because one of the early titles was Like, Duh. Because whenever I would show my research to young people, they’d say, “Like, duh. Isn’t this so obvious?” And it opens with the anecdote of a boy who says, “Can you just talk to my mom? Can you tell her that I’m going to be okay?” I found that refrain so common among young people.

You and your colleague Alice Marwick interviewed 166 teenagers for this book. But you’ve studied social media for a long time. What surprised you?

It was shocking how heavily constrained their mobility was. I had known it had gotten worse since I was a teenager, but I didn’t get it—the total lack of freedom to just go out and wander. Young people weren’t even trying to sneak out [of the house at night]. They were trying to get online, because that’s the place where they hung out with their friends.

And I had assumed based on the narratives in the media that bullying was on the rise. I was shocked that data showed otherwise.

Then why do narratives such as “Bullying is more common online” take hold?

It’s made more visible. There is some awful stuff out there, but it frustrates me when a panic distracts us from the reality of what’s going on. One of my frustrations is that there are some massive mental health issues, and we want to blame the technology [that brings them to light] instead of actually dealing with mental health issues.

I take your point that Facebook or Instagram is the equivalent of yesterday’s hangouts. But social media amplify everyday situations in difficult new ways. For example, kids might instantly see on Facebook that they’re missing out on something other kids are doing together.

That can be a blessing or a curse. These interpersonal conflicts ramp up much faster [and] can be much more hurtful. That’s one of the challenges for this cohort of youth: some of them have the social and emotional skills that are necessary to deal with these conflicts; others don’t. It really sucks when you realize that somebody doesn’t like you as much as you like them. Part of it is, then, how do you use that as an opportunity not to just wallow in your self-pity but to figure out how to interact and be like “Hey, let’s talk through what this friendship is like”?

You contend that teenagers are not cavalier about privacy, despite appearances, and adeptly shift sensitive conversations into chat and other private channels.

Many adults assume teens don’t care about privacy because they’re so willing to participate in social media. They want to be in public. But that doesn’t mean that they want to be public. There’s a big difference. Privacy isn’t about being isolated from others. It’s about having the capacity to control a social situation.

So if parents can let go of some common fears, what should they be doing?

One thing that I think is dangerous is that we’re trained that we are the experts at everything that goes on in our lives and our kids’ lives. So the assumption is that we should teach them by telling them. But I think the best way to teach is by asking questions: “Why are you posting that? Help me understand.” Using it as an opportunity to talk. Obviously there comes a point when your teenage child is going to roll their eyes and go, “I am not interested in explaining anything more to you, Dad.”

The other thing is being present. The hardest thing that I saw, overwhelmingly—the most unhealthy environments—were those where the parents were not present. They could be physically present and not actually present.

Read the entire article here.

Asimov Fifty Years On


In 1964, Isaac Asimov wrote an essay for the New York Times entitled Visit the World’s Fair in 2014. The essay was a free-wheeling opinion of things to come, viewed through the lens of New York’s World’s Fair of 1964. It shows that even a grand master of science fiction cannot predict the future — he got some things quite right and other things rather wrong. Some examples follow, along with a link to his full essay.

That said, what has captured recent attention is Asimov’s thinking on the complex and evolving relationship between humans and technology, and the challenges of environmental stewardship in an increasingly over-populated and resource-starved world.

So, while Asimov was certainly not a teller of fortunes, he had insights that many, even today, still lack.

Read the entire Isaac Asimov essay here.

What Asimov got right:

“Communications will become sight-sound and you will see as well as hear the person you telephone.”

“As for television, wall screens will have replaced the ordinary set…”

“Large solar-power stations will also be in operation in a number of desert and semi-desert areas…”

“Windows… will be polarized to block out the harsh sunlight. The degree of opacity of the glass may even be made to alter automatically in accordance with the intensity of the light falling upon it.”

What Asimov got wrong:

“The appliances of 2014 will have no electric cords, of course, for they will be powered by long-lived batteries running on radioisotopes.”

“…cars will be capable of crossing water on their jets…”

“For short-range travel, moving sidewalks (with benches on either side, standing room in the center) will be making their appearance in downtown sections.”

From the Atlantic:

In August of 1964, just more than 50 years ago, author Isaac Asimov wrote a piece in The New York Times, pegged to that summer’s World Fair.

In the essay, Asimov imagines what the World Fair would be like in 2014—his future, our present.

His notions were strange and wonderful (and conservative, as Matt Novak writes in a great run-down), in the way that dreams of the future from the point of view of the American mid-century tend to be. There will be electroluminescent walls for our windowless homes, levitating cars for our transportation, 3D cube televisions that will permit viewers to watch dance performances from all angles, and “Algae Bars” that taste like turkey and steak (“but,” he adds, “there will be considerable psychological resistance to such an innovation”).

He got some things wrong and some things right, as is common for those who engage in the sport of prediction-making. Keeping score is of little interest to me. What is of interest: what Asimov understood about the entangled relationships among humans, technological development, and the planet—and the implications of those ideas for us today, knowing what we know now.

Asimov begins by suggesting that in the coming decades, the gulf between humans and “nature” will expand, driven by technological development. “One thought that occurs to me,” he writes, “is that men will continue to withdraw from nature in order to create an environment that will suit them better.”

It is in this context that Asimov sees the future shining bright: underground, suburban houses, “free from the vicissitudes of weather, with air cleaned and light controlled, should be fairly common.” Windows, he says, “need be no more than an archaic touch,” with programmed, alterable, “scenery.” We will build our own world, an improvement on the natural one we found ourselves in for so long. Separation from nature, Asimov implies, will keep humans safe—safe from the irregularities of the natural world, and the bombs of the human one, a concern he just barely hints at, but that was deeply felt at the time.

But Asimov knows too that humans cannot survive on technology alone. Eight years before astronauts’ Blue Marble image of Earth would reshape how humans thought about the planet, Asimov sees that humans need a healthy Earth, and he worries that an exploding human population (6.5 billion, he accurately extrapolated) will wear down our resources, creating massive inequality.

Although technology will still keep up with population through 2014, it will be only through a supreme effort and with but partial success. Not all the world’s population will enjoy the gadgety world of the future to the full. A larger portion than today will be deprived and although they may be better off, materially, than today, they will be further behind when compared with the advanced portions of the world. They will have moved backward, relatively.

This troubled him, but the real problems lay yet further in the future, as “unchecked” population growth pushed urban sprawl to every corner of the planet, creating a “World-Manhattan” by 2450. But, he exclaimed, “society will collapse long before that!” Humans would have to stop reproducing so quickly to avert this catastrophe, he believed, and he predicted that by 2014 we would have decided that lowering the birth rate was a policy priority.

Asimov rightly saw the central role of the planet’s environmental health to a society: No matter how technologically developed humanity becomes, there is no escaping our fundamental reliance on Earth (at least not until we seriously leave Earth, that is). But in 1964 the environmental specters that haunt us today—climate change and impending mass extinctions—were only just beginning to gain notice. Asimov could not have imagined the particulars of this special blend of planetary destruction we are now brewing—and he was overly optimistic about our propensity to take action to protect an imperiled planet.

Read the entire article here.

Image: Driverless cars as imagined in 1957. Courtesy of America’s Independent Electric Light and Power Companies/Paleofuture.

 

 

 

Content Versus Innovation

The entertainment and media industry is not known for its innovation. Left to its own devices, the industry would have us all still consuming news from broadsheets and a town crier, and digesting shows at the theater. Not too long ago the industry, led by Hollywood heavyweights, was doing its utmost to kill emerging forms of media consumption, such as the videotape cassette and the VCR.

Following numerous regulatory, legal and political skirmishes, innovation finally triumphed over entrenched interests, allowing VHS tape, followed by the DVD, to flourish, albeit for a while. This of course paved the way for new forms of distribution — the rise of Blockbuster and a myriad of neighborhood video rental stores.

In a great ironic twist, the likes of Blockbuster failed to recognize signals from the market that, without significant and continual innovation, their business models would crumble. Now Netflix and other streaming services have managed to end our weekend visits to the movie rental store.

A fascinating article excerpted below takes a look back at the lengthy, and continuing, fight between the conservative media empires and the market’s constant pull from technological innovation.

[For a fresh perspective on the future of media distribution, see our recent posting here.]

From TechCrunch:

The once iconic video rental giant Blockbuster is shutting down its remaining stores across the country. Netflix, meanwhile, is emerging as the leader in video rental, now primarily through online streaming. But Blockbuster, Netflix and home media consumption (VCR/DVD/Blu-ray) may never have existed at all in their current form if the content industry had been successful in banning or regulating them. In 1983, nearly 30 years before thousands of websites blacked out in protest of SOPA/PIPA, video stores across the country closed in protest against legislation that would bar their market model.

A Look Back

In 1977, the first video-rental store opened. It was 600 square feet and located on Wilshire Boulevard in Los Angeles. George Atkinson, the entrepreneur who decided to launch this idea, charged $50 for an “annual membership” and $100 for a “lifetime membership” but the memberships only allowed people to rent videos for $10 a day. Despite an unusual business model, Atkinson’s store was an enormous success, growing to 42 affiliated stores in fewer than 20 months and resulting in numerous competitors.

In retrospect, Atkinson’s success represented the emergence of an entirely new market: home consumption of paid content. It would become an $18 billion domestic market, and, rather than cannibalizing the existing movie theater market, it would eclipse it and thereby become a massive revenue source for the industry.

Atkinson’s success in 1977 is particularly remarkable as the Sony Betamax (the first VCR) had only gone on sale domestically in 1975 at a cost of $1,400 (which in 2013 U.S. dollars is $6,093). As a comparison, the first DVD player in 1997 cost $1,458 in 2013 dollars and the first Blu-ray player in 2006 cost $1,161 in 2013 dollars. And unlike the DVD and Blu-ray player, it would take eight years, until 1983, for the VCR to reach 10 percent of U.S. television households. Atkinson’s success, and that of his early competitors, was in catering to a market of well under 10 percent of U.S. households.
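
Those constant-dollar figures are ordinary CPI arithmetic: multiply the nominal price by the ratio of the consumer price index in 2013 to the index in the year of sale. A quick sketch follows; note that the index values are rounded annual CPI-U averages, and the $1,000 nominal launch prices for the DVD and Blu-ray players are back-of-envelope assumptions of ours that happen to reproduce the article’s figures to within a percent or so.

# Converting a historical price to 2013 dollars via the CPI ratio.
# The index values are rounded CPI-U annual averages; the $1,000 nominal
# prices for the DVD and Blu-ray players are assumptions, not article data.
def in_2013_dollars(nominal, cpi_then, cpi_2013=233.0):
    return nominal * (cpi_2013 / cpi_then)
print(round(in_2013_dollars(1400, 53.8)))   # 1975 Betamax: ~6,063
print(round(in_2013_dollars(1000, 160.5)))  # 1997 DVD player: ~1,452
print(round(in_2013_dollars(1000, 201.6)))  # 2006 Blu-ray player: ~1,156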

While many content companies realized this as a massive new revenue stream — e.g. 20th Century Fox buying one video rental company for $7.5 million in 1979 — the content industry lawyers and lobbyists tried to stop the home content market through litigation and regulation.

The content industry sued to ban the sale of the Betamax, the first VCR. This legal strategy was coupled by leveraging the overwhelming firepower of the content industry in Washington. If they lost in court to ban the technology and rental business model, then they would ban the technology and rental business model in Congress.

Litigation Attack

In 1976, the content industry filed suit against Sony, seeking an injunction to prevent the company from “manufacturing, distributing, selling or offering for sale Betamax or Betamax tapes.” Essentially granting this remedy would have banned the VCR for all Americans. The content industry’s motivation behind this suit was largely to deal with individuals recording live television, but the emergence of the rental industry was likely a contributing factor.

While Sony won at the district court level in 1979, in 1981 it lost at the Court of Appeals for the Ninth Circuit where the court found that Sony was liable for copyright infringement by their users — recording broadcast television. The Appellate court ordered the lower court to impose an appropriate remedy, advising in favor of an injunction to block the sale of the Betamax.

And in 1981, under normal circumstances, the VCR would have been banned then and there. Sony faced liability well beyond its net worth, so it may well have been the end of Sony, or at least its U.S. subsidiary, and the end of the VCR. Millions of private citizens could have been liable for damages for copyright infringement for recording television shows for personal use. But Sony appealed this ruling to the Supreme Court.

The Supreme Court is able to take very few cases. For example in 2009, 1.1 percent of petitions for certiorari were granted, and of these approximately 70 percent are cases where there is a conflict among different courts (here there was no conflict). But in 1982, the Supreme Court granted certiorari and agreed to hear the case.

After an oral hearing, the justices took a vote internally, and originally only one of them was persuaded to keep the VCR as legal (but after discussion, the number of justices in favor of the VCR would eventually increase to four).

With five votes in favor of affirming the previous ruling, the Betamax (VCR) was to be illegal in the United States (see Justice Blackmun’s papers).

But then, something even more unusual happened – which is why we have the VCR and subsequent technologies: The Supreme Court decided for both sides to re-argue a portion of the case. Under the Burger Court (when he was Chief Justice), this only happened in 2.6 percent of the cases that received oral argument. In the re-argument of the case, a crucial vote switched sides, which resulted in a 5-4 decision in favor of Sony. The VCR was legal. There would be no injunction barring its sale.

The majority opinion characterized the lawsuit as an “unprecedented attempt to impose copyright liability upon the distributors of copying equipment” and rejected “[s]uch an expansion of the copyright privilege” as “beyond the limits” given by Congress. The Court even cited Mr. Rogers, who testified during the trial:

I have always felt that with the advent of all of this new technology that allows people to tape the ‘Neighborhood’ off-the-air . . . Very frankly, I am opposed to people being programmed by others.

On the absolute narrowest of legal grounds, through a highly unusual legal process (and significant luck), the VCR was saved by one vote at the Supreme Court in 1984.

Regulation Attack

In 1982 legislation was introduced in Congress to give copyright holders the exclusive right to authorize the rental of prerecorded videos. Legislation was reintroduced in 1983, the Consumer Video Sales Rental Act of 1983. This legislation would have allowed the content industry to shut down the rental market, or charge exorbitant fees, by making it a crime to rent out movies purchased commercially. In effect, this legislation would have ended the existing market model of rental stores. With 34 co-sponsors, major lobbyists and significant campaign contributions to support it, this legislation had substantial support at the time.

Video stores saw the Consumer Video Sales Rental Act as an existential threat, and on October 21, 1983, about 30 years before the SOPA/PIPA protests, video stores across the country closed down for several hours in protest. While the 1983 legislation died in committee, the legislation would be reintroduced in 1984. In 1984, similar legislation was enacted, The Record Rental Amendment of 1984, which banned the renting and leasing of music. In 1990, Congress banned the renting of computer software.

But in the face of public backlash from video retailers and customers, Congress did not pass the Consumer Video Sales Rental Act.

At the same time, the movie studios tried to ban the Betamax VCR through legislation. Eventually the content industry decided to support legislation that would require compulsory licensing rather than an outright ban. But such a compulsory licensing scheme would have drastically driven up the costs of video tape players and may have effectively banned the technology (similar regulations did ban other technologies).

For the content industry, banning the technology was a feature, not a bug.

Read the entire article here.

Image: Video Home System (VHS) cassette tape. Courtesy of Wikipedia.

MondayMap: The 124 States


The slew of recent secessionist movements in the United States got Andrew Shears, a geography professor at Mansfield University, thinking — what would the nation look like if all previous state petitions and secessionist movements had succeeded? Well, our MondayMap shows the result: Texas would be a mere sliver of its current self; much of California would be named Reagan; the Navajo of the Four Corners region would have their own state; and North Dakota would cease to exist.

Read the entire article here.

Image: Map of the United States with 124 States. Courtesy of Andrew Shears.

 

Auf Wiedersehen, Pet


After almost 65 years rolling off the production line, the VW Camper nears the end of its current journey. Then again, didn’t Volkswagen once make the same claim for its iconic Beetle, which now has a new lease on life? Oh well. In the meantime, hippies and other traveling souls will mourn the passing of an iconic, albeit rather unreliable, mode of transport (and form of housing).

See more images here.

Image courtesy of the Guardian / Royston White.

The Benefits of BingeTV

Netflix’s chief content officer, Ted Sarandos, is planning to kill you, with kindness. Actually, joy. His plan to recast our consumption of television programming aims to deliver mountains of joyful entertainment in place of the current wasteland of incremental TV dross punctuated with schlock.

While the plan is still in the works, the fruits of Netflix’s labor are becoming apparent — binge-watching is rapidly assuming the momentum of a cultural wave in the US. The super-sized viewing gulp, from, say, thirteen successive hours of House of Cards or Orange is the New Black, is quickly replacing the national attention deficit disorder enabled by the anxious and restless trigger finger on the TV remote, as viewers flip from one mindless show to the next. Yet, by making Netflix into the next great media and entertainment empire, Sarandos may be laying the foundation for an unintended, and more important, benefit — lengthening the attention span of the average adult.

From the Guardian:

Is Ted Sarandos a force for evil? It’s a theory. Consider the evidence. Britain is already a country beset by various health-ruining, bank balance-depleting behaviours – binge-drinking, chain-smoking, overeating, watching football. Since January 2012 when Netflix launched its UK operation, Sarandos, its chief content officer, has created a new demographic of bingeing Britons – 1.5 million subscribers who spend £5.99 a month to gorge on TV online.

To get a sense of what the 49-year-old Arizonan is doing to TV culture, imagine that you’ve just finished watching episode five of Joss Whedon’s Firefly on your laptop, courtesy of Netflix. You’ve got places to go, people to meet. But up pops a little box on screen saying the next episode starts in 12 seconds. Five hours later, you dimly realise that you’ve forgotten to pick up your kids from school and/or your boss has texted you 12 times wondering if you’re planning to show up today.

Does he feel responsible for creating this binge culture, not just here but across the world (Netflix has 38 million subscribers in 40 countries, who watch about a billion hours of TV shows and films each month), I ask Sarandos when we meet in a London hotel? He laughs. “I love it when it happens that you just have to watch. It only takes that little bit of prodding.”

Sarandos feels it is legitimate to prod the audience so that they can get what they want. Or what he thinks they want. All 13 episodes of, say, the political thriller House of Cards with Kevin Spacey, or the same number of the prison comedy-drama Orange is the New Black with Taylor Schilling, released in one great virtual lump.

Why? “The television business is based on managed dissatisfaction. You’re watching a great television show you’re really wrapped up in? You might get 50 minutes of watching a week and then 18,000 minutes of waiting until the next episode comes along. I’d rather make it all about the joy.”

Sarandos says he got an intimation of the pleasures of binge-viewing as a teenager in Phoenix, Arizona. On Sundays in the mid-70s, he and his family would gather round the telly to watch Mary Hartman, Mary Hartman, a satire on soap operas. “If you worked, the only way you could catch up with the five episodes they showed in the week was watching them back to back on Sunday night. So bingeing was already big in my subconscious.”

Years later, Sarandos binged again. “I really loved The Sopranos but didn’t have HBO. So someone would send me tapes of the show with three or four episodes. I would watch one episode and go: ‘Oh my God, I’ve got to watch one more.’ I’d watch the whole tape and champ at the bit for the next one.”

The TV revolution for which Sarandos and Netflix are responsible involves eliminating bit-champing and monetising instant gratification. Netflix has done well from that revolution: its reported net income was $29.5m for the quarter ending 30 June. Profits quintupled compared with the same period in 2012 – in part due to its new UK operation.

Sarandos hasn’t done badly either. He and his wife, former US ambassador Nicole Avant, have a $5.4m Beverly Hills property and recently bought comedian David Spade’s beachside Malibu home for $10.2m. Sarandos argues viewers have long been battling schedulers bent on stopping them seeing what they want, when they want. “Before time shifting, they would use VCRs to collect episodes and view them whenever they wanted. And, more importantly, in whatever doses they wanted. Then DVD box sets and later DVRs made that self-dosing even more sophisticated.”

He began studying how viewers consumed TV while working part-time in a strip-mall video store in the early 1980s. By 30, he was an executive for a company supplying Blockbuster with videos. In 2000, he was hired by Netflix to develop its service posting rental DVDs to customers. “We saw that people would return those discs for TV series very quickly, given they had three hours of programming on them – more quickly than they would a movie. They wanted the next hit.”

Netflix mutated from a DVD-by-post service to an on-demand internet network for films and TV series, and Sarandos found himself cutting deals with traditional TV networks to broadcast shows online a season after they were originally shown, instead of waiting for several years for them to be available for syndication.

Thanks to Netflix and its competitors, the old TV set in the living room is becoming redundant. That living-room fixture is being replaced by a host of mobile surrogates – tablet, laptop, Xbox and even smartphone.

Were that all Sarandos had achieved, he would have been a minor player in the idiot-box revolution. But a couple of years ago, he decided Netflix should commission its own drama series and release them globally in season-sized bundles. In making that happen, he radically changed not just how but what we watch.

Why bother? “Up till a couple of years ago, a network would make every pilot for a series into a one-off show. I started getting worried, thinking nobody’s going to make series any more, and so we wouldn’t be able to buy them [for Netflix] a season after they’ve been broadcast. So we said maybe we should develop that muscle ourselves.” Sarandos has a $2bn annual content budget, and spends as much as 10% on developing that muscle.

Strikingly, he didn’t spend that money on movies, but TV. Why? “Movies are becoming more global, which is making them less intimate. If you make a movie for the world, you don’t make it for any country.

“I think television is going in the opposite direction – richer characterisation, denser storylines – and so much more like reading a novel. It is a golden age for TV, but only because the best writers and directors increasingly like to work outside Hollywood.” Hence, perhaps, the successes of The Sopranos, The West Wing, The Wire, Downton Abbey, and by this time next year – he hopes – Netflix series such as the Wachowskis’ sci-fi drama Sense 8. TV, if Sarandos has his way, is the new Hollywood.

Netflix’s first foray into original drama came only last year with Lilyhammer, in which Steven Van Zandt, who played mobster Silvio Dante in The Sopranos, reprised his gangster chops as Frank “The Fixer” Tagliano, a mobster starting a new life in Lillehammer, Norway. The whole season of eight episodes was released on Netflix at the same time, delighting those suffering withdrawal symptoms after the end of The Sopranos. A second season is soon to be released.

Sarandos suggests Lilyhammer points the way to a new globalised future for TV drama – more than a fifth of Norway’s population watched the show. “It opened a world of possibilities – mainstream viewing of subtitled programming in the US and releasing in every language and every territory at the exact same moment. To me, that’s what the future will be like.”

Sarandos also tore up another page of the TV rulebook, the one that says each episode of a series must be the same length. “If you watched Arrested Development [the sitcom he recommissioned in May, seven years after it was last broadcast] none of those episodes has the same running time – some were 28 minutes, some 47 minutes. I’m saying take as much time as you need to tell the story well. You couldn’t really do that on linear television because you have a grid, commercial breaks and the like.”

House of Cards, his second commissioned drama series, whose first season was released in February, demonstrates Sarandos’s diabolical genius (if that is what it is) even better. He once described himself as “a human algorithm” for his ability, developed in that Phoenix strip mall, to recommend movies based on a customer’s previous rentals. He did something similar when he commissioned House of Cards.

“It was generated by algorithm,” Sarandos says, grinning. But he’s not entirely joking. “I didn’t use data to make the show, but I used data to determine the potential audience to a level of accuracy very few people can do.”

It worked like this. In 2011, he learned that Hollywood director David Fincher, then working on his movie adaptation of The Girl with the Dragon Tattoo, was pitching his first TV series. Based on a script by Oscar-nominated writer Beau Willimon, it was to be a remake of the 1990 BBC series House of Cards, and would star Kevin Spacey as an amoral US senator.

Some networks were sceptical, but Sarandos – not least because he’s a political junkie who loves political thrillers and, along with his wife, helped raise nearly $700m for Obama’s re-election campaign – was tempted. He unleashed his spreadsheets, using Netflix data to determine how many subscribers watched political dramas such as The West Wing or the original House of Cards.

“We’ve been collecting data for a long time. It showed how many Netflix members love The West Wing and the original House of Cards. It also showed who loved David Fincher’s films and Kevin Spacey’s.”

Armed with this data, Sarandos made the biggest gamble of his life. He went to David Fincher’s West Hollywood office and announced he wanted to spend $100m on not one, but two 13-part seasons of House of Cards. Based on his calculations, he says: “I felt that sounded like a pretty safe bet.”

Read the entire article here.

Image: Netflix logo. Courtesy of Wikipedia / Netflix.

Fate Isn’t All That It’s Cracked Up to Be

If you believe in the luck of the draw, the turn of a card, the spin of a wheel; if you believe in the leaves in your teacup, the lines on your palm, or the numbers in your fortune cookie; if you believe in fate or a psychic or the neighbor’s black cat, then you are all the poorer for it — perhaps not spiritually, but certainly financially.

From the Telegraph:

Strange as it sounds, a serious study has been undertaken by academics into the link between people’s propensity to trust in luck or fate – and their financial success.

And it has concluded the less faith someone places in luck, fate or some other “external factor”, the more wealth they are likely to accumulate.

Some might say the conclusion is common sense, but the report – produced by three academics at the University of Melbourne in Australia – even came up with a figure of AUS$150,000 (£82,000), which was the difference in savings over four years between “households who believe fate will determine their future” and “households that believe they can shape their own destiny.”

The report, here, titled “Locus of control and savings”, splits psychological profiles into two groups, those with either an “internal” or “external” “locus” of control. The latter are people who believe that fate, or luck – or other people – are the determining force in shaping their lives. Those with an “internal locus of control” are those who are “strong believers in their ability to shape their own destiny.”

The survey then linked psychological measures of behaviour to national savings data. “We find that households in which the reference person has an internal locus of control save more both in terms of levels and as a percentage of their permanent incomes than do households with external reference persons.”

It arrived at a precise financial measure, saying: “over a four year period households with a strong sense of shaping one’s destiny are on average $150,000 better off, and save 7.7% more of their income.”

The authors claimed that although their work relied on Australian data, it would reflect trends in other developed economies.

The work is one of a growing number of studies into what motivates saving, the type of people most likely to save – and how governments can stimulate more saving.

Read the entire article here.

I Am Selfie


Presidents, prime ministers, pop stars and the public in general — anyone and everyone armed with a smartphone and a dire need for attention snapped selfies in 2013. Without a doubt, 2013 was the year of the narcissistic selfie.

See more selfies here.

Image: Astronaut Luca Parmitano uses a digital still camera to take a photo of himself in space. Courtesy of NASA / Daily Mail.