This brain painting was destined for a gallery, but it started as a tiny slice of a woman’s brain. In 2014, a small piece of cerebral cortex was removed from a woman undergoing surgery for epilepsy. From these few cubic millimeters of tissue, researchers from Harvard University and Google created the most detailed wiring diagram of the human brain the world has ever seen.
Biologists and machine learning experts spent a decade creating an interactive map of brain tissue that contains about 57,000 cells and 150 million synapses. The map shows intertwined cells, mirror-image cell pairs, and what the researchers call unclassifiable egg-shaped “objects.” This astonishingly complex diagram is expected to advance scientific research, from the study of human neural circuits to the search for treatments for disease.
“If we could make a map at very high resolution and see all the connections between different neurons and analyze that on a large scale, we might be able to identify the rules of wiring,” says Daniel Berger, one of the project’s principal investigators and an expert in connectomics, which studies how individual neurons connect to form functional networks. “This might allow us to create mechanistic models that explain how we think and how memories are stored.”
Jeff Lichtman, a professor of molecular and cellular biology at Harvard University, explains that researchers in his lab, led by Alexander Shapson-Coe, used an electron microscope to photograph the cells in the tissue and build the brain map. The 45-year-old woman’s brain tissue was stained with heavy metals, which bind to the lipid membranes inside cells and scatter electrons, making cell boundaries visible under an electron microscope.
The tissue was then embedded in resin and cut into slices just 34 nanometers thick (for comparison, a typical sheet of paper is about 100,000 nanometers thick). This made the tissue easier to map, turning a 3D problem into a 2D problem, Berger says. The team then took electron-microscope images of each 2D slice, amounting to a massive 1.4 petabytes of data.
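A quick back-of-envelope calculation shows why slicing at this scale produces so much data. Only the 34 nm slice thickness, the paper comparison, and the ~1.4 PB total come from the article; the 1 mm sample height used below is an illustrative assumption, not the study’s actual dimensions:

```python
# Back-of-envelope: how thin 34 nm slices are and why the dataset is so large.
# The 1 mm sample height is a hypothetical figure for illustration only.

SLICE_NM = 34                    # slice thickness from the article, nanometers
PAPER_NM = 100_000               # typical sheet of paper, nanometers
SAMPLE_HEIGHT_NM = 1_000_000     # hypothetical 1 mm of tissue

slices_per_paper = PAPER_NM // SLICE_NM       # slices stacked in one paper-thickness
num_slices = SAMPLE_HEIGHT_NM // SLICE_NM     # slices needed for a 1 mm sample

total_bytes = 1.4e15                          # ~1.4 petabytes of imagery
bytes_per_slice = total_bytes / num_slices    # average data per slice image

print(f"{slices_per_paper} slices fit in one sheet of paper")
print(f"{num_slices} slices for a hypothetical 1 mm-tall sample")
print(f"~{bytes_per_slice / 1e9:.0f} GB of imagery per slice on average")
```

Even under these rough assumptions, each 2D slice carries tens of gigabytes of imagery, which is why automated processing was unavoidable.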
Once the Harvard researchers had these images, they did what many people do when faced with a problem: they turned to Google. The tech giant’s team, led by Viren Jain, used machine learning algorithms to align the 2D images and create a 3D reconstruction with automatic segmentation, where components in an image (for example, different cell types) are automatically distinguished and classified. Part of the segmentation required what Lichtman calls “ground truth data,” which involved Berger, who worked closely with the Google team, manually redrawing parts of the tissue to further inform the algorithm.
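Google’s production pipeline relied on machine-learning models, but the core idea of segmentation, assigning every pixel of an image to a labeled component, can be illustrated with a much simpler connected-components pass over a binary image. This is a toy stand-in for the actual algorithm, not the method the team used:

```python
from collections import deque

def label_components(image):
    """Give each 4-connected foreground region a distinct integer label.

    `image` is a list of lists of 0/1 values; returns a same-shaped grid
    of labels (0 = background). A toy stand-in for ML-based segmentation.
    """
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if image[sy][sx] and not labels[sy][sx]:
                next_label += 1                  # start a new region
                labels[sy][sx] = next_label
                queue = deque([(sy, sx)])
                while queue:                     # flood-fill the region
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and image[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
    return labels

# Two separate "cells" in a tiny 2D slice receive the labels 1 and 2:
slice_2d = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 1],
]
print(label_components(slice_2d))
```

At petabyte scale this naive approach would be hopeless; the point is only to show what “automatic segmentation” means: every pixel ends up assigned to a numbered object, which is what the manually drawn ground-truth data helps the real models learn to do.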
“Digital technology makes it possible to see every cell in this tissue sample and display them in different colors depending on their size,” Berger explains. Traditional neuron imaging methods used for over a century, such as staining samples with a chemical called Golgi stain, hide some elements of neural tissue.
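Once every cell carries a digital label, coloring by size reduces to counting pixels per label and mapping the counts to a palette. The sketch below is a minimal illustration of that idea, not the project’s actual rendering scheme:

```python
def color_by_size(labels, palette):
    """Map each labeled segment to a palette color based on its pixel count.

    `labels` is a 2D grid of integer segment IDs (0 = background).
    Segments are ranked by size and assigned palette entries from
    smallest to largest; a toy version of size-based coloring.
    """
    sizes = {}
    for row in labels:
        for lbl in row:
            if lbl:
                sizes[lbl] = sizes.get(lbl, 0) + 1
    # Rank segments by size; reuse the last palette color if there
    # are more segments than colors.
    ordered = sorted(sizes, key=sizes.get)
    return {lbl: palette[min(i, len(palette) - 1)]
            for i, lbl in enumerate(ordered)}

labels = [[1, 1, 0],
          [1, 0, 2],
          [0, 0, 2]]
print(color_by_size(labels, ["blue", "green", "red"]))
# segment 2 (2 pixels) gets "blue"; segment 1 (3 pixels) gets "green"
```

Because each pixel belongs to exactly one labeled object, any property computed per label (size, shape, cell type) can be turned into a color in the same way, which is what makes the digital map more revealing than a Golgi-stained slide.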