We now start the last cycle of the course by finally getting to dark energy. All the previous material can be thought of as preliminary up to this point, as we now put it all together.
In the 1970s and 80s it was known that the limited amount of dark matter plus the small amount of ordinary matter did not add up to enough mass to make the universe flat, even though we observe a flat universe without curvature (K=0). For a flat universe, the observed density should equal the density required for flatness, the critical density. We call their ratio omega (Ω), which should then equal 1 for a flat universe. Yet by the 1990s, the dark matter plus ordinary matter only added up to 30% of the critical density, so omega only equaled 0.3.
Many cosmologists thought it was just a matter of time before we found all the remaining 70%, probably dark matter, perhaps somewhere in between galaxies. But a growing number began to realize that this was not going to happen, so they began searching for other answers.
In the Friedmann equation (8πG/3)ρ = H² + K, we roughly knew the Hubble constant (H), but the other two parameters remained unknown. They could try to measure K, which is what we'll think about in the next lecture. Or they could try to measure the energy density of the universe, rho (ρ). This is related to H, in that they could compare the value of H for nearby galaxies with the value for the furthest galaxies. The difference would tell them the rate at which the expanding universe was slowing down due to its mass, and so weigh the universe.
Type Ia supernovae were standard candles in theory, due to the Chandrasekhar limit of white dwarfs. These could be observable in the furthest galaxies, while cepheid variable stars are observable in nearby galaxies. The "period / luminosity" relation for cepheid variables is analogous to the "light curve / luminosity" relation for type Ia supernovae, discovered in the early 1990s by Mark Phillips: the time from brightening to fading allowed the type and luminosity to be determined, and thus the distance.
The surprise was that the universe was found not to be slowing down, but speeding up. After the initial shock, it began to actually make sense! This was the missing 70% needed to make omega = 1. It helped resolve the age problem, the structure problem, and questions about the CMB, the energy density, and the deceleration!
I was fortunate to schedule many of these initial supernovae observations using the Hubble Space Telescope. It was tense work because the telescope pointing had to be tweaked at the last possible moment to coordinate with ground based observations. Members from both teams, sometimes even Saul Perlmutter himself, would talk with me about plans for sending coordinates from the observatory, which I would then convert and relay to the Hubble. One wrong digit and I would be in big trouble! The only mistake was on their end once, sending me a wrong coordinate, thus missing observations of the supernovae altogether. When they announced the initial results of the accelerating universe, it was so unexpected that I immediately thought this would someday be awarded a Nobel prize, and that appears to still be a safe bet.
Dark energy was not affected by gravity, so matter was ruled out. It was smoothly distributed in a persistent field, so radiation was ruled out. New solutions create new questions.
The question at the end of the course guidebook for this lecture asks why the Hubble constant (H) can be constant in an expanding universe. This is explained fully in lecture 16, so don't panic. They made a mistake by including it a few lectures too early.
We've reached that happy point in this series of lectures where we start talking about dark energy as well as dark matter. Dark energy is something different from dark matter. It is discovered in different ways, it plays a different role in the cosmic story, it comes from something different.
So the fact that we need both dark energy and dark matter to explain the observations that we see in cosmology, is evidence that the dark sector of the universe, 95% of what the universe is made of, is interesting somehow! It's not just all the same stuff. It might be that someday we can subdivide the dark sector into more than two bits. Yet right now, we think that dark matter plus dark energy, is enough to explain everything we've been able to see in the universe so far.
However, the way we get to the discovery of dark energy, goes through thinking about dark matter. The thinking about dark matter goes all the way back to the 1930s, when Fritz Zwicky was looking at the dynamics in clusters of galaxies. He noticed that in the Coma cluster of galaxies, the motions of the individual galaxies were too fast to be explained just on the basis of the ordinary matter you saw there. By the 1970s, Vera Rubin looked at individual galaxies and realized that they also were spinning too fast to be associated with nothing but the visible matter.
So through the 1970s and 80s, people became absolutely convinced that there was something called dark matter, and that the dark matter couldn't just be ordinary matter that was hidden from us somehow. Yet the question remained, how much dark matter is there? Every time you looked at a larger system, you found more and more dark matter. In the 1980s, therefore, a lot of people were convinced that we'd continue to find more and more dark matter.
For example, you can look at individual galaxies and clusters, yet how can you be sure that there isn't more stuff that is in between the galaxies and clusters? How can you be absolutely sure of that? Furthermore there was another reason to be skeptical. That comes from the Friedmann equation and the notion of the critical density of the universe.
So we look again at the Friedmann equation that relates stuff in the universe to the curvature of spacetime, in the case of an expanding, homogeneous universe.
(8πG/3) ρ = H² + K
On the left, we see rho (ρ), which stands for the energy density of the universe. Now we're working under the approximation where everything is perfectly smooth. The right side shows the Hubble constant (H), which tells us about the expansion rate of the universe, and the spatial curvature (K). If K is zero, space is flat and geometry is like Euclid said it was; that's a special value for the spatial curvature, which could otherwise be positive or negative. The other terms in the Friedmann equation have no such special middle value; they want to be positive. The energy density is something that should be positive. We like positive amounts of energy, not negative amounts. That's dangerous.
The Hubble constant squared, doesn't matter what the value of H is, it's never going to be a negative number. So there's no special, interesting, particular values of the energy density (ρ) or the expansion rate (H), yet there is a special, interesting, middle value for the spatial curvature (K). It's zero.
However, when you plug in the numbers, the energy density you observe in the universe, in the matter of cluster and galaxies, doesn't seem to be equal to the special amount of density you would need to make the universe spatially flat. We can define the critical density as the density that you would need to satisfy the Friedmann equation when K equals zero, when there is no spatial curvature.
We can define that density, whether or not that's the density we actually have. In fact, cosmologists often define a number called omega (Ω), which is taking the actual density of the universe, and dividing by the critical density. So if the density is equal to the critical density, we say Ω=1. In fact, with the stuff we've found in the universe, the ordinary matter and the dark matter, only about 30% of the critical density is there in the matter of galaxies and clusters. So Ω seems to be 0.3.
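To make the numbers concrete, here's a small Python sketch of the critical density, which is just the Friedmann equation above solved for ρ with K = 0, giving ρ_c = 3H²/(8πG). The Hubble constant value of 70 km/s/Mpc is an assumed round number for illustration, not a result quoted in the lecture.

```python
import math

G = 6.674e-11         # Newton's constant, m^3 kg^-1 s^-2
MPC_IN_M = 3.0857e22  # one megaparsec in metres

def critical_density(H0_km_s_mpc):
    """rho_c = 3 H^2 / (8 pi G): the density that makes K = 0, in kg/m^3."""
    H = H0_km_s_mpc * 1000.0 / MPC_IN_M  # convert H to 1/s
    return 3.0 * H ** 2 / (8.0 * math.pi * G)

rho_c = critical_density(70.0)  # assuming H ~ 70 km/s/Mpc
print(f"critical density ~ {rho_c:.2e} kg/m^3")
print(f"~ {rho_c / 1.67e-27:.0f} hydrogen atoms per cubic metre")
```

The answer comes out to roughly 9 × 10⁻²⁷ kg/m³, only about half a dozen hydrogen atoms per cubic metre, which gives a feeling for just how empty a critical-density universe is.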
That's a very strange number to have. A very nice number to have would be 1.0, since the universe would be spatially flat. A number like 10 to the ten billionth power, or for that matter, one ten billionth, would be numbers which wouldn't surprise us. They would just be some numbers, and we can't really explain them.
Yet 0.3 makes it seem like we're missing something. It's telling you that we're 30% of the way to being the critical density, and you remember that every time you look, you find more stuff. So throughout the 1980s, many cosmologists were convinced that we'd continue to find more stuff, and eventually find enough matter in the universe, to show that the energy was equal to the critical density. The value of 0.3 is close to 1.0, yet not equal to it. A lot of people just said, well we haven't found everything yet.
However, in the 1990s, that point of view became harder and harder to stick with. Technology became better, and our ability to measure the energy in the universe in terms of matter became more and more convincing. Especially with things like gravitational lensing and x-ray maps of clusters of galaxies, we became convinced that the matter we'd found in the universe wasn't adding up to 1.0.
The idea that a cluster of galaxies is a fair sample, is exactly the idea that the amount of dark matter in that cluster, compared to the amount of ordinary matter, is the same in that cluster as it is for the universe as a whole. If that's true, the amount of stuff we're finding in clusters of galaxies, implies that Ω is only 0.3, it's only 30% of the critical density, and does not quite equal 100%.
So if you were a respectable theoretical cosmologist in the 1990s, you would have begun to admit that this was true. Yet Sean can say that there were very few respectable theoretical cosmologists! The observers, who actually took the data, were becoming convinced that something was going on, yet the theorists were still clinging to the hope that somehow Ω was equal to 1.0.
Sean can remember personally giving a talk at the end of 1997, when he was asked to give a review talk on the cosmological parameters, the Hubble constant (H), Omega (Ω), rho (ρ), and so forth. Sean was one of these disreputable theoretical cosmologists that was personally convinced that Ω matter must be one, and we just hadn't found it yet. So when he sat down to look at all the papers which had recently been written, all the data that was collected, the talk he ended up giving said, "Well you know, something is going on." Maybe:
We do not live in a universe where cold dark matter makes Ω = 1.0.
Something weird is going on, so that either Ω is not 1, and we do not have quite the critical density,
Or perhaps it's not all cold dark matter. Maybe there's a mixture of cold and hot dark matter?
Or perhaps there's something weird in the early universe that made galaxies form in a strange way?
Or maybe there's more stuff than just matter in the universe?
Maybe there's this stuff that these days we would call dark energy. So in late 1997, we were getting desperate. We had a whole bunch of things on the table for what could possibly explain the data, yet didn't know which one of them was right. So what do you want to do to resolve this? You want to weigh the universe, and find out how much energy density there is in space, yet you want to weigh the whole universe, not just a bit of it here and there. You could always be missing something in between. So how can you weigh the entire universe all at once?
It turns out there are two techniques to use. One is to actually directly measure the spatial curvature (K). If you did this, you could find out whether the density you have is only 30% of the critical density, or whether it's 100%. We'll talk about that in the next lecture.
The other way is to measure the deceleration of the universe. You measure how the expansion rate of space changes as a function of time. This is something you'd expect to happen in ordinary cosmology. It's true that the universe is expanding, things are moving apart, yet while they do that, the different particles in the galaxies, the ordinary matter and the dark matter, are pulling on all the other particles. Stuff is exerting a gravitational force. So you expect that expansion rate to gradually slow down.
If there's enough stuff, it will in fact re-collapse. That would be Ω>1, so we had more than the critical density. So if you measured precisely the rate at which the expansion of the universe were changing with time, that would tell you the total amount of stuff in the universe. The challenge is just to actually do that. It's very difficult. How do you measure the rate at which the expansion of the universe is changing? How do you measure the deceleration of the universe?
Well you do what Hubble did, yet you just do it better! Hubble found the expansion of the universe, by comparing the velocity of distant galaxies to their distances. So Hubble's Law, that tells us velocity is proportional to the distance, is always going to be valid in a small region of the universe, cosmologically speaking.
Yet when you get out to a very far away galaxy, you're looking at light that was emitted in the distant past by the time it gets to you. You're actually probing what the universe was doing at an earlier time, since light moves at only one light year per year. Therefore, if you measure the distances and redshifts, the apparent velocities, of galaxies that are very far away, you can see whether or not the expansion rate has changed. You can measure the acceleration or deceleration.
So you want to do what Hubble did and use standard candles. If you have some object whose brightness is fixed, so you know how bright it really is, then by seeing how bright it appears to you, then you can calculate how far away it is. That's the basic idea of a standard candle. Hubble's standard candles were cepheid variable stars, pulsating stars for which you could figure out from the period of pulsation, how intrinsically bright the star really was.
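The standard candle logic is just the inverse-square law run backwards. Here's a minimal sketch, ignoring redshift and other cosmological corrections, and using a made-up luminosity value:

```python
import math

def standard_candle_distance(luminosity_W, observed_flux_W_m2):
    """Invert the inverse-square law F = L / (4 pi d^2) to get d = sqrt(L / (4 pi F))."""
    return math.sqrt(luminosity_W / (4.0 * math.pi * observed_flux_W_m2))

# Two candles with the same (made-up) intrinsic luminosity: the one that
# appears 100x dimmer must be 10x farther away.
L = 1.0e29  # watts, purely illustrative
d_near = standard_candle_distance(L, 1.0e-12)
d_far = standard_candle_distance(L, 1.0e-14)
print(round(d_far / d_near, 6))  # -> 10.0
```

That square-root relationship is why brightness measurements have to be so precise: a 15% error in the assumed luminosity translates directly into a distance error.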
The problem is that cepheid variables are not that bright. They're just the brightness of ordinary stars. You can't pick out individual cepheid variables in very distant galaxies. Instead, you need a much brighter standard candle. So eventually what you appeal to are supernovae, exploding stars that are incredibly bright. We see an image of one of the most beautiful supernovae you'll ever see, SN 1994D, in the bottom left of its galaxy. That is a star in that galaxy, not a nearby star in our own. So the brightness of that supernova is comparable to that of the entire galaxy it is inside, or just in front of. That's billions of times the brightness of an ordinary star.
That's the good news, yet the bad news is that they're rare. You don't see supernovae all the time, and you can't predict them. In a galaxy the size of the Milky Way, you're only going to get about one supernova per century. The other problem is that supernovae are not standard candles all by themselves. They are not all the same brightness. There are different kinds of supernovae.
When we discussed MACHOs and how you create neutron stars, we met a type of supernova called a core-collapse supernova (type II). You have a bright, heavy star burning nuclear fuel. That fuel eventually burns out and the core of the star just collapses, blowing off the outer layers. Clearly, for different masses of stars, when they collapse, their brightness is going to be different. So type II supernovae are not standard candles.
Yet from the name of type II supernovae, you might guess there's something called a type I supernova. In fact, there are various different kinds of type I supernovae, and the particular kind called a type Ia supernova can be used as a standard candle. A type Ia supernova is a very different object than a type II supernova. A type Ia comes from a white dwarf star, which is what you get when a medium mass star exhausts its nuclear fuel and just settles down to be a white dwarf.
Yet imagine that you're lucky, and you have a white dwarf star that has a companion. There's another star next to it, which in the course of its evolution, grows. The white dwarf begins to accrete some of the mass from its companion. We see an artist's conception of a white dwarf pulling mass away from a nearby star, so that the mass of the white dwarf gradually grows and grows.
Yet there's a limit, as white dwarfs cannot just be arbitrarily massive. Eventually the gravitational field will become so strong that the white dwarf collapses. This limit is called the Chandrasekhar limit, and it's the same for every white dwarf, everywhere in the universe. So you can see that you have a hint in fact, that something is a standard candle. The place where the white dwarf collapses and the outer layers are blown off, forms a type Ia supernova. So it would not be surprising if every such event is more or less the same brightness.
It's true, they are plausibly standard candles. Type Ia supernovae could be approximately the same brightness. Yet there's various problems associated with the idea of using type Ia supernovae to measure the acceleration or deceleration of the universe. First, type Ia supernovae are not precisely standard candles. By looking at nearby supernovae, people noticed that type Ia's differed in brightness by about 15%. This doesn't sound like that much, but we're trying to look for a very subtle change in the expansion rate of the universe, so every 15% counts.
The real breakthrough in this field came when Mark Phillips, in the early 1990s, realized that just like cepheid variables, type Ia supernovae had a light curve/luminosity relationship. The supernova doesn't pulsate, yet it does go up in brightness and then come down. What Phillips realized was that the time it takes to decline in brightness told you what the maximum brightness was. The type Ia supernovae that are the brightest are those that take the longest to decline. So if you measured not only the maximum brightness of the supernova, but also how it evolved, how the brightness declined as a function of time, then you could really pin down that overall brightness, to better than 5%. Then you have something that is a good enough standard candle to measure the deceleration of the universe.
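A toy version of a Phillips-style correction might look like the sketch below. The coefficients here are placeholders chosen for illustration, not Phillips's published fit; the point is only the direction of the relation: slower decline means intrinsically brighter.

```python
def standardized_peak_magnitude(dm15, a=-19.3, b=0.78):
    """Phillips-style width-luminosity correction (illustrative coefficients).
    dm15 = how many magnitudes the supernova fades in the 15 days after peak.
    Returns an inferred absolute peak magnitude (more negative = brighter)."""
    return a + b * (dm15 - 1.1)

slow_decliner = standardized_peak_magnitude(0.9)  # fades slowly -> brighter
fast_decliner = standardized_peak_magnitude(1.5)  # fades fast -> dimmer
print(slow_decliner, fast_decliner)
```

So instead of assuming every type Ia has the same brightness, you measure the shape of each light curve and correct each supernova individually, which is what shrinks the scatter from roughly 15% down to a few percent.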
The other problems are more like worries. One is, when you observe a supernova that's very distant, how do you know they were just as standard during the early universe as they are today? That is something we'll have to deal with by taking a lot of data, and trying to figure out the real physics behind these objects. Yet more importantly, how do you even find them to start with? You could just look at a random galaxy, staring at it for 100 years. Then you have a 50% chance of finding one supernova. Yet no one is going to give you telescope time to do that! So you need to come up with a better technique.
The thing is that only by the 1990s did astronomical technology evolve to the point where we could find a whole bunch of supernovae all at once. So people developed techniques using large CCD cameras, which allowed you to take an image of a fairly wide swath of the sky. You want an image that is deep enough to get lots of galaxies, yet wide enough that you get a whole bunch of them, so you get galaxies at different redshifts in great numbers. Then you'll notice that the rise time of a supernova, the time it takes to go from being very dim to very bright, is a couple of weeks. That turns out to be a very convenient time.
You can then take an image of some region in the sky with literally thousands of galaxies in it, and then you come back again a few weeks later to take another image. In fact you can do the first image at new moon, when the sky is not affected by the bright moonlight, and then take the next image during the next new moon, and it works out perfectly. Then you want to compare these two images to look for one of the galaxies getting a tiny bit brighter.
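That image-subtraction step can be sketched with a toy example. Everything here is synthetic: two small noisy arrays stand in for the CCD images, and one pixel is artificially brightened between the two epochs.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy 8x8 "images" of the same patch of sky, a few weeks apart.
reference = rng.normal(100.0, 1.0, size=(8, 8))        # first epoch
later = reference + rng.normal(0.0, 1.0, size=(8, 8))  # second epoch: noise only...
later[3, 5] += 50.0                                    # ...plus one spot that brightened

# Subtract the epochs; everything unchanged cancels out to noise,
# and the transient stands out as the brightest pixel.
difference = later - reference
y, x = np.unravel_index(np.argmax(difference), difference.shape)
print(y, x)  # the candidate supernova's location
```

Real pipelines have to align the images, match the blurring of the atmosphere between nights, and sift out asteroids and variable stars, but the core idea really is this simple: subtract and look for what changed.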
So the picture we see now is a little bit of a fake, since it's a better than average view of what such a supernova discovery looks like. It's from the Hubble Space Telescope, rather than the ground based telescopes where most of this work is done. Yet you get a feeling for what's going on. We see an image on the left of the Hubble Deep Field, and another image on the right of the same region of the sky, taken several years later. You can tell there's a supernova in the image on the right, since there's an arrow pointing to it! That's the nice thing about these images: there are always arrows pointing to where to look! In this case, the arrow points to a fairly dim, red dot, which you can zoom in on, and find that indeed it's a supernova all by itself. This is a very distant supernova, and an especially good example, yet by using this type of technique and technology, you can find supernovae by the dozens.
So this project of finding a whole bunch of type Ia supernovae and using them to measure what the universe was doing at earlier times, was undertaken in the 1990s by two different groups. One group was centered at Lawrence Berkeley Labs, led by Saul Perlmutter. The other group was scattered around the globe, not really centered anywhere. Yet the leader of the group was an Australian astronomer named Brian Schmidt. So we see a picture of Brian Schmidt on the left, and Saul Perlmutter on the right, arguing over whose universe is accelerating or decelerating faster! (Both with their dukes up!)
It was a mostly friendly rivalry between the two groups, Perlmutter's on one side, Schmidt's on the other. Schmidt's group involved a lot of people: Adam Riess, now at STScI, was the lead author on the most important paper; Bob Kirshner at Harvard was adviser to both Brian Schmidt and Adam Riess, so was sort of the intellectual godfather of the team! Alex Filippenko was another prominent member of the team, most famous for giving Teaching Company lectures on modern astronomy that Sean encourages us to have a look at.
So it was important that two groups were doing it, because if only one group did it, no one would believe them! Yet if two groups do it and get the same result, then people are willing to think that something is going on which is at least on the right track! Indeed, in 1998, only a few months after Sean gave his 1997 talk saying that something was going on, the two supernova groups announced just what that something was: the universe was not decelerating at all, but accelerating! It was expanding faster and faster. The correct image of the universe is one in which galaxies have a velocity that is increasing as a function of time.
This came as a great surprise indeed! Most did not expect us to live in an accelerating universe. The Schmidt group's project, in fact, had a subtitle about "searching for the deceleration of the universe!" What they found instead was the acceleration of the universe. So you had a very strange situation where on the one hand the result was a complete surprise, yet on the other hand it made perfect sense. The reason people were willing to believe this result, besides the fact that two very good groups got the same result, was that it made things snap together. It answered a lot of questions all at once, as we'll explain.
However, the fact that the universe is accelerating is a physical challenge. It's not what you expected. There's an intuitive argument that as the universe expands, it should be decelerating, because particles are pulling on each other. If particles are pulling on each other, how in the world do you explain the apparently observed phenomenon that the universe is accelerating rather than decelerating? The answer is that you need something besides particles. You need to invent a new kind of stuff!
If the Friedmann equation is correct, and the universe has nothing in it but matter and radiation, it doesn't matter what kind of matter and radiation you have, the universe will necessarily be decelerating. So either the Friedmann equation is not correct, according to these data, a possibility that we'll explore later on, or there is something in the universe that is neither matter nor radiation. We call that stuff dark energy.
So what does dark energy mean? We use the word dark energy and it sounds a little bit mysterious, yet we'll emphasize that even though there's a lot we don't know about dark energy, it's not just a placeholder for something going on that we just don't understand. So dark energy really does have some properties that are definite, and need to be part of any theory of what the dark energy is.
First, the dark energy is smoothly distributed through space. There is more or less the same amount of dark energy here in this room as there is anywhere between the galaxies. At the very least, dark energy does not clump noticeably in the presence of a gravitational field. The reason we know that is, if it did clump, we would have noticed the dark energy before, when we measured the energy densities of galaxies and clusters. Gravitational lensing and other means would have shown there to be more energy there than can be explained by the matter, and that would be the dark energy.
Yet we don't see that. The dark energy is the same amount inside a cluster, as outside the cluster. That's why we didn't see it when we looked with gravitational lensing and dynamical means. So you might guess that the dark energy could take a form like some kind of radiation, something moving very quickly. If anything moves fast like photons or neutrinos, it would not cluster into galaxies and clusters, so that would be smoothly distributed, just like the dark energy.
Yet the second important property of dark energy is that it is persistent. The energy density, the number of ergs per cubic centimeter of the dark energy, doesn't change as the universe expands. That's the opposite of what radiation does. Radiation loses energy rapidly as the universe expands, so for dark energy you need something that doesn't go away. It is the fact that dark energy is persistent that explains the acceleration of the universe. That's why we're convinced that the dark energy is really something different. It's not a kind of particle that is pulling on other particles, but a kind of field, a kind of substance, a kind of fluid that fills space. Its energy density doesn't go away as the universe expands.
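The contrast between matter, radiation, and dark energy can be summarized by how each one's energy density scales with the scale factor a of the universe. The exponents (a⁻³ for matter, a⁻⁴ for radiation, a⁰ for persistent dark energy) are the standard scalings, applied here in a toy sketch:

```python
def diluted_density(rho_today, a, kind):
    """Energy density of each component after the universe expands by scale factor a."""
    exponent = {"matter": -3, "radiation": -4, "dark_energy": 0}[kind]
    return rho_today * a ** exponent

# Double the size of the universe (a = 2):
print(diluted_density(1.0, 2.0, "matter"))       # 0.125: diluted by volume, a^-3
print(diluted_density(1.0, 2.0, "radiation"))    # 0.0625: volume dilution plus redshift, a^-4
print(diluted_density(1.0, 2.0, "dark_energy"))  # 1.0: persistent, unchanged
```

This is why dark energy eventually dominates: matter and radiation thin out as space grows, while the dark energy density just sits there, the same in every cubic centimeter, forever.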
Yet this is asking quite a bit! This is like going around and saying, "OK, we've discovered some stuff that is completely unlike ordinary matter, dark matter, or radiation." Nevertheless, over the course of 1998, people began to buy this story fairly quickly. Why are astronomers who are by nature quite skeptical people, willing to believe this remarkable claim? The real point is that it made everything suddenly make sense.
Like we said, in 1997 things were becoming difficult to understand. We had a prejudice that Ω matter, the density of ordinary stuff plus dark matter, should be 1.0, so we should have the critical density. Yet the observations simply weren't consistent with that. Plus there were other problems that made the universe in which we believed not quite make sense when compared with the data that we had.
One such problem was the age problem. Given the amount of stuff in the universe, and given the Friedmann equation, you can calculate how old the universe should be. It's an absolute requirement that the universe should be older than the stuff inside it. When people calculated the age of the universe, and compared it to the ages of the oldest stars, they were often getting an answer that the stars were older than the universe. That didn't quite make sense. There were large error bars on that, so it was not a very hard and fast conclusion, yet it still made people worry that we were missing something. In an accelerating universe, the universe today is older for a given value of the Hubble constant than it would be in a decelerating universe, so the age problem went away, just like that.
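To see the numbers, here's a rough calculation, assuming a Hubble constant around 70 km/s/Mpc. The matter-only age formula t₀ = (2/3)(1/H₀) and the flat matter-plus-dark-energy formula are standard textbook results:

```python
import math

H0 = 70.0                     # Hubble constant, km/s/Mpc (assumed round value)
HUBBLE_TIME_GYR = 977.8 / H0  # 1/H0 in Gyr (977.8 converts from km/s/Mpc)

# Flat, matter-only universe (Omega_m = 1): t0 = (2/3) * (1/H0)
age_matter_only = (2.0 / 3.0) * HUBBLE_TIME_GYR

# Flat universe with Omega_m = 0.3 and Omega_Lambda = 0.7 (standard
# analytic result for a matter + cosmological constant universe):
om, ol = 0.3, 0.7
age_with_lambda = (2.0 / (3.0 * math.sqrt(ol))) * math.asinh(math.sqrt(ol / om)) * HUBBLE_TIME_GYR

print(f"matter only:      {age_matter_only:.1f} Gyr")  # ~9.3 Gyr, younger than the oldest stars
print(f"with dark energy: {age_with_lambda:.1f} Gyr")  # ~13.5 Gyr, comfortably older
```

A matter-only critical-density universe comes out around 9 billion years old, uncomfortably younger than the oldest stars, while adding dark energy stretches the age to around 13.5 billion years, and the tension evaporates.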
Another problem was large-scale structure in the universe. If you had a universe with nothing but matter in it, and the matter was the critical density, you form a lot of structure. There's a lot of matter around, it clumps very easily, and you have more structure in that hypothetical universe than we seem to be finding in the universe we observe. You could explain this by saying that we don't have that much matter, that we don't have the critical density. Yet another way to explain it, is to imagine that there's something that doesn't clump. There's some dark energy that is smoothly distributed and doesn't contribute to the growth of large scale structure.
Finally there is this business about the critical density. Like we said, there was a prejudice on the part of theorists that the critical density was the nicest value for the total density to have. They were therefore hoping that observers would continue to find more matter, even though the observers were telling them that no, they hadn't found any more. It turns out that once you find the universe to be accelerating, and you invoke the presence of dark energy as an explanation for this acceleration, you then ask how much energy you need in the dark energy. The answer is about 70% of the critical density!
In other words, the really nice thing about the dark energy, was that it provided exactly enough energy to make the total energy density of the universe, equal the critical density. So it wasn't just that we'd found a new element of the universe, but it was that we found what was an apparently complete picture, a complete inventory of the universe.
So this is where the pie chart that we began our lectures with, comes from. We have what is now called a concordance cosmology, a view of the universe in which 5% of the stuff in the universe is ordinary matter, 25% is dark matter, and 70% is dark energy. That simple set of ingredients, is enough to make the universe flat, to be the critical density, to get the age right, to correctly explain large-scale structure, the acceleration of the universe, the CMB, and get the matter density right in galaxies and clusters.
That is a lot of observations that come from a small amount of assumptions. That's why people were so quick to jump onto the concordance cosmology bandwagon. It also tells us something about our place in the universe. Not only are we not like Aristotle would have it, sitting at the center of the cosmos, we are not even made of the same stuff as the cosmos! We're only 5% of the universe. The kinds of things we're made of, are only 5% of the energy density of the stuff in the universe.
This is a big deal, a sufficiently big deal that it was recognized by Science magazine in 1998 as the breakthrough of the year. They invented a nice little picture of Albert Einstein, blowing the universe bubbles with his pipe, to illustrate the fact that this dark energy was in fact something that Einstein himself had contemplated, as we will talk about in later lectures.
Of course, even though it makes everything fit together, the dark energy and its role in concordance cosmology are still dramatic claims. We would certainly not accept the supernova data alone as sufficient to believe such a dramatic claim. We want to check things, like that the supernovae are telling us the right thing.
For example, the statement that the supernovae indicate that the universe is accelerating is just the statement that the very distant supernovae are dimmer than we would have thought. So you can invent much more mundane ways to make supernovae dimmer. For example, perhaps there is a cloud of dust between us and the supernovae that is just scattering some of the light. That is absolutely the kind of thing that the supernova groups took very seriously. The point is that when dust scatters light, it doesn't scatter every wavelength equally, but scatters more blue light than red. So the light from a supernova would be reddened if it passed through dust.
One of the things the supernovae groups did was to check very carefully for reddening and other alterations of the spectra of the supernovae they were observing, yet they didn't find any. They also checked that the behavior of the supernovae they observed was the same, whether they came from small, little galaxies, or big galaxies, in clusters, outside clusters, and there was no environmental effect that was leading to the supernovae being different in one place from another.
The real killer check however, on this picture of the universe, would be to make a prediction using it, and then to go measure that prediction. So here is a prediction made from this concordance cosmology. If 5% of the universe is ordinary matter, 25% is dark, and 70% is dark energy, and it all adds up to the critical density, then the spatial curvature of the universe should be zero. The universe should be spatially flat. So far we haven't talked about any observational check on that, so the next lecture will take us through how we know whether or not the universe is spatially flat, and the answer is yes, it is spatially flat. In other words, even without the supernovae data, something is 70% of the energy density of the universe, and that something is the dark energy.
So that's good news that we understand something about the universe, that 70% of it is dark energy. The bad news is we don't know what that stuff is, on some deep level. So in addition to making sure that we're on the right track, that there is dark energy, theoretical physicists are now faced with the task of explaining what the dark energy is, where it came from, what it might be going into, how it can react with other stuff.
There is a simplest guess, which is Einstein's idea of the cosmological constant. We will take that guess seriously, and also look at alternative explanations to see which one fits the best.