
The Science Behind Mental Overload and How to Avoid It (the overload, not the science...)


A subject that's been talked about increasingly over the last few months is burnout among IT workers. There was a large and positive response to Stephen Nelson-Smith's presentation on the subject at Devopsdays Tel Aviv in October 2013, and again to Mike Preston's ignite talk at Devopsdays London in November.

They were both roundly praised for their courage in speaking publicly about their own burnout, but more than that they encouraged other people to share their own experiences. And the more people speak up, the less the stigma, and the more clearly we see the magnitude of the problem.

I don't think it's an overreaction to say that burnout is not an occasional unfortunate event, but rather a serious and frequently occurring occupational hazard among knowledge workers in the tech industry and elsewhere. Just as athletes have to take especial care of their bodies, and both accept and try to reduce the occupational risk of injury, so we as knowledge workers must take equally good care of our minds. We need to ensure that our minds are trained and fit for the work we give them. We need to design and adhere to sustainable working habits and procedures. And we need to be ever vigilant for signs of mental stress or exhaustion, because they are very, very common.

Academic psychology has a lot to say about how we may create optimal working conditions for our mighty but fragile brains. This article explores how cognitive hard work and effortful self-control interact and combine to deplete our mental energy and leave us prone to impaired intellectual performance and unwise decision-making.

Self-control and deliberate thought are both types of mental work, and draw from the same limited budget of mental energy. In his book, "Thinking, Fast and Slow" (Penguin, 2011), Nobel laureate psychologist Daniel Kahneman gives an example illustrating this, in which self-control refers to exerting the effort to walk faster than one's natural pace, whilst trying to do some serious creative thinking. He found that walking at a leisurely pace, which can be maintained automatically with no self-control effort, aids thought; deliberately maintaining a faster pace, however, impacts on the ability to make serious cognitive effort.

This effect isn't only apparent when we overload with simultaneous mental exercises; it also holds true with successive tasks. Kahneman describes experiments conducted by Roy Baumeister, which showed that efforts of will are tiring. Following one mentally challenging task, of cognition or self-control, we are less able - or willing - to undertake and succeed in another challenge of intellect or self-control. This effect is known as "ego depletion". Ego-depleted people more readily acquiesce to the urge to give up on a demanding task. And it holds true for a range of combinations of tasks: tests of emotional self-control followed by tests of physical stamina, resisting temptation followed by hard cognitive exercises, among others. Moreover, the list of activities and situations shown to deplete self-control is even wider, including deliberately trying not to think of certain things, making difficult choices, being patient with the bad behaviour of others and overriding our prejudices. All these draw on the same budget of mental energy, and that budget is finite. However, it seems that it's motivation that gets reduced, more than actual mental energy. Unlike with cognitive load - which really does reach hard limits - with sufficient incentive, people can override the effects of ego-depletion.

Out of the lab and back in the real world of burnout, this is most readily applied to the self-control required to make wise choices and not give in to the urge to do dumb things. Experiments have shown that when the brain is heavily taxed by cognitive effort, it becomes much harder to resist temptation. In the work environment maybe that temptation is to slack off, check email, play games, read something irrelevant (and less effortful), eat unhealthy foods, or turn to potentially addictive temptations like drink, drugs, smoking, porn and so on. And when a person is suffering from stress, anxiety or depression - all the uncomfortable mental states that go with burnout - they will often try to "self-medicate" with the same kinds of things: avoiding those painful feelings through control strategies, typically distraction or numbing, which almost certainly make them feel worse and play right into that depleted ability to resist temptation, creating a damaging feedback loop.

Interestingly, when we speak of "mental energy", the word energy isn't just a metaphor. It's been shown that in cognitive effort (and efforts of will), the brain consumes a substantial amount of glucose. Kahneman uses the analogy of "a runner who draws down glucose stored in her muscles during a sprint", and Baumeister's work has further confirmed that ego-depletion can be cured in the short term by ingesting glucose.

Unsurprisingly, in addition to cognitive effort and hunger, our mental energy is also depleted by fatigue, consumption of alcohol and a short-term memory full of anxious thoughts. There's a paradox here, in that some of the guidance around avoiding overload and burnout - eat well, sleep enough, don't work too long, don't drink too much, don't worry - is so damn obvious and well known that it's actually very easy to ignore. (There's another interesting issue going on here - akrasia: knowing the right thing to do and still not doing it - but I plan to write about that separately, so I won't go into it in this article.) And yet it's fundamental to our personal and professional wellbeing - our vital intellectual resources are finite and need to be stewarded wisely and replenished often. This is why I'm finding it so interesting and helpful to understand the science behind these things. It makes them feel more real, more serious, and less like something our parents told us years ago and we promptly ignored.


Working simultaneously vs waiting simultaneously


"Multiprocessing", "multi-threading", "parallelism", "concurrency" etc. etc. can give you two kinds of benefits:

  • Doing many things at once - 1000 multiplications every cycle.
  • Waiting for many things at once - wait for 1000 HTTP requests just issued.

Some systems help with one of these but not the other, so you want to know which one - and if it's the one you need.

For instance, CPython has the infamous GIL - global interpreter lock. To what extent does the GIL render CPython "useless on multiple cores"?

  • Indeed you can hardly do many things at once - not in a single pure Python process. One thread doing something takes the GIL and the other thread waits.
  • You can however wait for many things at once just fine - for example, using the multiprocessing module (pool.map), or you could spawn your own thread pool to do the same. Many Python threads can concurrently issue system calls that wait for data - reading from TCP sockets, etc. Then instead of 1000 request-wait, request-wait steps, you issue 1000 requests and wait for them all simultaneously. Could be close to a 1000x speed-up for long waits (with a 1000-thread worker pool; more on that below). Works like a charm (a minimal sketch follows this list).
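
A rough sketch of that pattern - the URLs, the 100-thread pool size, and the use of multiprocessing's thread-based pool are illustrative assumptions, not the only way to do it:

    # Many Python threads waiting on I/O at once. The GIL is released while
    # each thread blocks inside the socket read, so the requests overlap
    # instead of running back-to-back.
    from multiprocessing.pool import ThreadPool
    from urllib.request import urlopen

    urls = ["http://example.com/item/%d" % i for i in range(100)]  # made-up URLs

    def fetch(url):
        return urlopen(url).read()   # blocks in a system call; the GIL is released

    pool = ThreadPool(100)           # one thread per outstanding request
    pages = pool.map(fetch, urls)    # issue all requests, wait for all of them

The total time is roughly that of the slowest response rather than the sum of all of them.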

So GIL is not a problem for "simultaneous waiting" for I/O. Is GIL a problem for simultaneous processing? If you ask me - no, because:

  • If you want performance, it's kinda funny to use pure Python and then mourn the fact that you can't run, on 8 cores, Python code that's 30-50x slower than C to begin with.
  • On the other hand, if you use C bindings, then the C code could use multiple threads actually running on multiple cores just fine; numpy does it if properly configured, for instance. Numpy also uses SIMD/vector instructions (SSE etc.) - another kind of "doing many things at once" that pure Python can't do regardless of the GIL (see the sketch just after this list).
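
A small illustration of that last point - the sizes and operations here are arbitrary, and how many cores the matrix multiply actually uses depends on which BLAS your numpy build links against:

    # The heavy lifting happens outside pure Python, so neither the GIL nor
    # Python's per-bytecode overhead is the bottleneck.
    import numpy as np

    a = np.random.rand(2000, 2000)
    b = np.random.rand(2000, 2000)

    # Vectorized elementwise work: one C loop (typically SIMD) instead of
    # millions of interpreted Python operations.
    c = a * b + 1.0

    # Matrix multiply dispatches to BLAS; a threaded BLAS (OpenBLAS, MKL, ...)
    # will spread it across cores on its own.
    d = np.dot(a, b)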

So IMO Python doesn't have as bad a story in this department as it's reputed to have - and if it does look bad to you, you probably can't tolerate Python's slowness doing one thing at a time in the first place.

So Python - or C, for that matter - is OK for simultaneous waiting, but is it great? Probably not as great as Go or Erlang - which let you wait in parallel for millions of things. How do they do it? Cheap context management.

Context management is a big challenge of waiting for many things at once. If you wait for a million things, you need a million sets of variables keeping track of what exactly you're waiting for (has the header arrived? then I'm waiting for the query. has it arrived? then I ask the database and wait for it etc. etc.)

If those variables are thread-local variables in a million threads, then you run into one of the problems with C - and hence OS-supported threads designed to run C. The problem is that C has no idea how much stack it's gonna need (because of the halting problem, so you can't blame C); and C has no mechanism to detect that it ran out of stack space at runtime and allocate some more (because that's how its ABIs have evolved; in theory C could do this, but it doesn't.)

So the best thing a Unixy OS could do is, give C one page for the stack (say 4K), and make, say, the next 1-2M of the virtual address space inaccessible (with 64b pointers, address space is cheap). When C page-faults upon stack overflow, give it more physical memory - say another 4K. This method means at least 4K of allocated physical memory per thread, or 4G for a million threads - rather wasteful. (I think in practice it's usually way worse.) All regardless of the fact that we often need only a fraction of that memory for the actual state.

And that's before we got to the cost of context switching - which can be made smaller if we use setjmp/longjmp-based coroutines or something similar, but that wouldn't help much with stack space. C's lax approach to stack management - which is the way it is to shave a few cycles off the function call cost - can thus make C terribly inefficient in terms of memory footprint (speed vs space is generally a common trade-off - it's just a bad one in the specific use case of "massive waiting" in C).

So Go/Erlang don't rely on the C-ish OS threads but roll their own - based on their stack management, which doesn't require a contiguous block of addresses. And AFAIK you really can't get readable and efficient "massive waiting" code in any other way - your alternatives, apart from the readable but inefficient threads, are:

  • Manual state machine management - yuck
  • Layered state machines as in Twisted - better, but you still have callbacks looking at state variables
  • Continuation passing as in Node.js - perhaps nicer still, but still far from the smoothness of threads/processes/coroutines (a sketch contrasting the state-machine and coroutine styles follows this list)
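
To see why the explicit state-machine style gets awkward, here's a toy contrast under an assumed "length header, then that many payload bytes" protocol - all names are made up. The first version is the manual state machine; the second keeps the same logic as a plain function that can block, which is what green threads/coroutines buy you:

    # Style 1: manual state machine - "where was I?" lives in explicit fields
    # that every callback has to inspect and update.
    class LengthPrefixedParser:
        def __init__(self):
            self.state = "HEADER"
            self.buf = b""
            self.needed = 4

        def feed(self, data):
            self.buf += data
            messages = []
            while len(self.buf) >= self.needed:
                chunk, self.buf = self.buf[:self.needed], self.buf[self.needed:]
                if self.state == "HEADER":
                    self.needed = int.from_bytes(chunk, "big")
                    self.state = "BODY"
                else:
                    messages.append(chunk)
                    self.state, self.needed = "HEADER", 4
            return messages

    # Style 2: the same logic as straight-line code. With green threads or
    # coroutines, read(n) "blocks" only this lightweight context, and the
    # local variables *are* the per-connection state.
    def handle_connection(read, handle):
        while True:
            length = int.from_bytes(read(4), "big")
            handle(read(length))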

The old Node.js slides say that "green threads/coroutines can improve the situation dramatically, but there is still machinery involved". I'm not sure how that machinery - the machinery in Go or Erlang - is any worse than the machinery involved in continuation passing and event loops (unless the argument is about compatibility more than efficiency - in which case machinery seems to me a surprising choice of words.)

Millions of cheap threads or whatever you call them are exciting if you wait for many events. Are they exciting if you do many things at once? No; C threads are just fine - and C is faster to begin with. You likely don't want to use threads directly - it's ugly - but you can multiplex tasks onto threads easily enough.

A "task" doesn't need to have its own context - it's just a function bound to its arguments. When a worker thread is out of work, it grabs the task out of a queue and runs it to completion. Because the machine works - rather than waits - you don't have the problems with stack management created by waiting. You only wait when there's no more work, but never in the middle of things.

So a thread pool running millions of tasks doesn't need a million threads. It can be a thread per core, maybe more if you have some waiting - say, if you wait for stuff offloaded to a GPU/DSP.
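
A sketch of that task-queue idea in Python terms - the work function and chunk sizes are made up, and a process pool stands in for the C worker threads only because of CPython's GIL:

    # A task is just a function bound to its arguments; a fixed pool of
    # workers - roughly one per core - pulls tasks off a queue and runs
    # each one to completion. No task ever waits in the middle of things,
    # so there's no per-task stack to keep around.
    import multiprocessing as mp

    def crunch(chunk):                 # made-up CPU-bound task
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        chunks = [range(i * 10000, (i + 1) * 10000) for i in range(1000)]
        with mp.Pool(processes=mp.cpu_count()) as pool:   # one worker per core
            results = pool.map(crunch, chunks)            # 1000 tasks, a handful of workers
        print(sum(results))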

I really don't understand how Joe Armstrong could say Erlang is faster than C on multiple cores, or things to that effect, with examples involving image processing - instead of event handling which is where Erlang can be said to be more efficient.

Finally, a hardware-level example - which kind of hardware is good at simultaneous working, and which is good at simultaneous waiting?

If your goal is parallelizing work, eventually you'll deteriorate to SIMD. SIMD is great because there's just one "manager" - instruction sequencer - for many "workers" - ALUs. CPUs, DSPs and GPUs all have SIMD. NVIDIA calls its ALUs "cores" and 16-32 ALUs running the same instruction "threads", but that's just shameless marketing. A "thread" implies, to everyone but a marketeer, independent control flow, while GPU "threads" march in lockstep.

In practice, SIMD is hard despite thousands of man-years having been invested into better languages and libraries - because telling a bunch of dumb soldiers marching in lockstep what to do is just harder than running a bunch of self-motivating threads each doing its own thing.

(Harder in one way, easier in another: marching in lockstep precludes races - non-deterministic, once-in-a-blue-moon, scary races. But races of the kind arising between worker threads can be almost completely remedied with tools. Managing the dumb ALUs cannot be made easier with tools and libraries to the same extent - not even close. Where I work, there's roughly an entire team responsible for SIMD programming, while threading is mostly automatic and bugs are weeded out by automated testing.)

If, however, you expect to be waiting much of the time - for memory or for high-latency floating point operations, for instance - then hordes of hardware threads lacking their own ALUs, as in barrel threading or hyper-threading, can be a great idea, while SIMD might do nothing for you. Similarly, a bunch of weaker cores can be better than a smaller number of stronger cores. The point being, what you really need here is a cheap way to keep context and switch between contexts, while actually doing a lot at once is unlikely to be possible in the first place.

Conclusions

  • Doing little or nothing while waiting for many things is both surprisingly useful and surprisingly hard (something it took me way too long to internalize in both my hardware-related and my software/server-related work). It motivates rather strange-looking things, such as "green threads" and hardware threads without their own ALUs.
  • Actually doing many things in parallel - to me the more "obviously useful" thing - is difficult in an entirely different way. It tends to drag in ugly languages, intrinsics, libraries etc. about as much as having to do one single thing quickly. The "parallelism" part is actually the simplest (few threads, so context management is easy; races are either non-existent [SIMD] or very easy to weed out [worker pool running tasks]).
  • People doing servers (which wait a lot) and people doing number-crunching (work) think very differently about these things. Transplanting experience/advice from one area to the other can lead to nonsensical conclusions.

See also

Parallelism and concurrency need different tools - expands on the reasons for races being easy to find in computational code - but impossible to even uniformly define for most event handling code.


Todd Terje: It's Album Time


In early 2012, the music director of a Norwegian state-funded radio station called P3 declined to add a song called "Inspector Norse" by disco producer Todd Terje to its rotation, saying it sounded like "background music at a beach bar." When an interviewer asked him what he thought about the radio station's description, Terje said he agreed with it. "It sounds like elevator music. Good, danceable elevator music." Then, in a pun fit only for hypothetical dads, he added, "Elevate your body!" In Terje's world, there is no distinction made between beating and joining—it's all join, join, join.

It's Album Time is, as advertised, his first full-length album. The title sets the tone: Casual, confident, and unburdened by the imagined need for significance that scares so many good dance producers into losing their cool when given a bigger platform. Most of the music on it could be classified as disco, with shades of cocktail lounge, exotica, surf instrumentals, and other styles that favor whimsy and novelty over sober artistic expression. Not that Terje isn't an artist—he is, and a careful one, fluent in history, expert with texture, and with a grasp on composition more akin to a 1960s film composer than a contemporary techno producer. But for as much ground as he covers on It's Album Time, the music feels effortless, gliding from Henry Mancini-esque detective jazz to bouncy, Stevie Wonder funk like breeze blowing through the waffle weave of a leisure suit. Conventional wisdom bears out: The looser the grip, the tighter the hold.

Despite recycling four of its twelve tracks from previously released singles and EPs, It's Album Time has a linear, cohesive feel. Instead of trying to top "Strandbar" or "Inspector Norse", Terje ties them together with short interstitial tracks—valleys that give perspective to the mountains. If he ever capitulates to the conventions of making a full-length album, it's in structure, which here is less redolent of disco than classic-rock pranksters like Paul McCartney or Frank Zappa: an introduction that features men whispering the words "It's album time" repeatedly, peaking with a ballad halfway through; closing with the joyous "Inspector Norse", and dying away to the sound of distant applause.

Terje is, at heart, a comedian. "I like my music very fruity," he told Resident Advisor in 2007. "Lots of percussion, lots of silly effects." When I interviewed the illustrator Bendik Kaltenborn, who has drawn the covers for most of Terje's releases including It's Album Time, he told me that the two first bonded over what Kaltenborn called a "stupid" sense of humor. Everything about It's Album Time and Terje's self-presentation—whether it's the fart-like synth sounds, the conga-line enthusiasm, or the promo photos of him flexing his minor biceps with a pout on his face—is so studiously carefree that he sometimes seems less like a human being than an all-night party incarnate.

But like all comedy, Terje's act is held together by a taut thread of sadness. The beauty of his music is the beauty of a neon sign outside a cheap motel: It's kitschy but it knows it, and in its kitsch conveys both loneliness (it's dark outside and you've been driving for hours) and its easy resolution (it's warm inside and happy hour never ends, pink paper umbrella gratis). In the mockumentary-style video for Terje's "Inspector Norse" (which is excerpted from a Norwegian short film called Whateverest), a failed electronic musician living with his elderly father in a small town spends a day bowling and cooking drugs from household chemicals before turning out the lights and dancing to "Inspector Norse", alone. Afterwards, he wanders the streets in facepaint, confused, crying. He survives mostly on illusions, and without happy music, he'd be lost.

The album's least representative song is the one that sticks with me the most: A cover of the Robert Palmer ballad "Johnny and Mary," sung by Roxy Music singer Bryan Ferry. The song is heavy, melancholic, and almost oppressively romantic—moods that the rest of It's Album Time feels designed to make you forget about. The lyrics tell the story of a couple who know they're alienated from each other but have hung on for so long they struggle with whether or not to break up. "Johnny's always running around, trying to find certainty," the opening line goes. "He needs all the world to confirm that he ain't lonely/ Mary counts the walls, knows he tires easily."

Ferry has spent his entire career crying crocodile tears, exploring the ways a seemingly insincere performance can ring with more feeling and pathos than something we recognize as "real". His voice—once a dazzling, cartoonish instrument, like Elvis with his finger in a wall socket—sounds hollowed-out and whispery, an old man whose wisdom brings him no comfort. In the original song, Palmer seems to hover above the characters, observing them, maybe even judging. Ferry sounds like he's in the next room, suffering troubles of his own.

As the song crests and the rippling arpeggios of Terje's synths reach their climax, you might wonder: What is a song so sad doing on an album so relentlessly upbeat? Not to ruin the mood, I think—only as a reminder that sadness is a choice. You could be "Inspector Norse" if you wanted to, peacocking wild across the dancefloor, spilling drinks on a stranger, recognizing the brevity of life by enjoying it. From an artist like Terje, it's proof that he could go deep if he wanted to. He'd just rather have fun.

1 public comment
  • gazuga (Edmonton, 4 days ago): Heeeeeeere's Johnny and Mary: http://tagfu.org/audio/Todd%20Terje%20-%20Johnny%20and%20Mary%20(ft.%20Bryan%20Ferry).mp3

How to speed read on Linux


Xmodulo: A startup called Spritz raised $3.5 million in seed money to develop an API that supposedly allows a user to read 1,000 words per minute.


Heartbleed Explanation

Are you still there, server? It's me, Margaret.
  • grammargirl (Brooklyn, NY, 6 days ago): Clearest explanation I've seen by FAR.
  • smadin (6 days ago): yeah, I think this does a very good job of making clear JUST HOW BAD this is.
26 public comments
  • tomazed (1 day ago): crystal clear
  • josephwebster (Denver, CO, USA, 3 days ago): This is actually a very good explanation.
  • Tobiah (San Jose, California, 4 days ago): XKCD explains heartbleed
  • CJPoll (Rexburg, Idaho, 4 days ago): The tech industry discovered a bug in security software last week that affects you. This explains in layman's terms what is happening.
  • Lacrymosa (Boston, MA, 4 days ago): good simple explanation of heartbleed
  • jchristopherslice (Clemson, SC, 4 days ago): Computer Science 101
  • expatpaul (Belgium, 5 days ago): The best explanation of Heartbleed I've seen.
  • chrisminett (Milton Keynes, UK, 5 days ago): xkcd does it again!
  • katster (Sactown, CA, 6 days ago): Simple is good.
  • mitthrawnuruodo (Wherever, 6 days ago): Best explanation, yet.
  • mrnevets (6 days ago): Heartbleed: a simple explanation. It affected a huge number of websites. Be safe and change your passwords!
  • macjustice (Seattle, 6 days ago): Best explanation yet.
  • jkevmoses (McKinney, Texas, 6 days ago): Great explanation of Heartbleed that is causing internet security issues all over the place.
  • srsly (Atlanta, Georgia, 6 days ago): You know I'm only sharing this because I've never seen a story this shared before. 56 people! 57 now. I should get back to work.
  • glindsey1979 (Aurora, IL, 6 days ago): If you aren't a techie, this will explain the Heartbleed bug to you super-simply.
  • chrispt (37.259417,-79.935122, 6 days ago): Perfect explanation of how Heartbleed works.
  • aaronwe (Sioux City, Iowa, 6 days ago): Perfect.
  • sfringer (North Carolina USA, 6 days ago): In a nutshell!
  • JayM (Shenandoah Valley, VA, 6 days ago): .
  • automatonic (Eastern WA, USA, 6 days ago): #xkcd visualizes the #heartbleed bug with stickfigures. Never trust your input.
  • bgschaid (7 days ago): You can’t explain it simpler and more to the point
  • bogorad (Moscow, Russia, 7 days ago): He knows how!
  • gradualepiphany (Los Angeles, California, USA, 7 days ago): Such a good explanation.
  • Covarr (7 days ago): Ah, now I understand.
      - rohitt (5 days ago): Yes. Clear as a day
  • revme (Seattle, WA, 7 days ago): This actually makes it really clear.
  • teh_g (Folsom, CA, 7 days ago): Alt text: Are you still there, server? It's me Margaret.

18 commands to monitor network bandwidth on Linux server

This post mentions some Linux command-line tools that can be used to monitor network usage. These tools monitor the traffic flowing through network interfaces and measure the speed at which data is currently being transferred. Incoming and outgoing traffic is shown separately.
Some of the commands show the bandwidth used by individual processes. This makes it easy to detect a process that is overusing network bandwidth.
The tools have different mechanisms for generating the traffic report. Some tools, like nload, read the "/proc/net/dev" file to get traffic stats, whereas others use the pcap library to capture all packets and then calculate the total size to estimate the traffic load.
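
As a rough sketch of the first mechanism - sampling the byte counters in "/proc/net/dev" twice and dividing by the interval - here is approximately what such tools compute (the interface name is an assumption, and the field positions follow the usual /proc/net/dev layout):

    import time

    def read_counters(iface="eth0"):        # interface name is an assumption
        with open("/proc/net/dev") as f:
            for line in f:
                if line.strip().startswith(iface + ":"):
                    fields = line.split(":", 1)[1].split()
                    return int(fields[0]), int(fields[8])   # rx_bytes, tx_bytes
        raise ValueError("interface not found: " + iface)

    rx1, tx1 = read_counters()
    time.sleep(1)
    rx2, tx2 = read_counters()
    print("rx %.1f KiB/s   tx %.1f KiB/s"
          % ((rx2 - rx1) / 1024.0, (tx2 - tx1) / 1024.0))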
Here are the command names, grouped by feature:

1. Overall bandwidth - nload, bmon, slurm, bwm-ng, cbm, speedometer, netload

2. Overall bandwidth (batch style output) - vnstat, ifstat, dstat, collectl

3. Bandwidth per socket connection - iftop, iptraf, tcptrack, pktstat, netwatch, trafshow

4. Bandwidth per process - nethogs
1. Nload
Nload is a command-line tool that lets users monitor incoming and outgoing traffic separately. It also draws a graph of the traffic, whose scale can be adjusted. It is easy and simple to use, but does not support many options.
So...

Read the full post here: 18 commands to monitor network bandwidth on Linux server
