Isaac Asimov is often remembered as a reassuring writer. Robots that follow rules. Futures that can be planned. Problems that yield to intelligence. The End of Eternity is the book where that confidence fractures.

This video explores one of Asimov’s most uncompromising novels, not as a piece of nostalgic science fiction, but as a sustained argument about safety, control, and moral responsibility. It is a book that asks an uncomfortable question that feels increasingly relevant today: what happens when protecting humanity becomes more important than trusting it?

At the heart of The End of Eternity is a simple, seductive idea. If we can prevent suffering, why would we not? If disasters can be avoided, why allow them to happen? If the future can be managed carefully, why leave it to chance? Asimov takes that logic seriously, follows it all the way to its conclusion, and then asks us to live with the consequences.

This is not a plot summary. It is not a celebration of clever time travel mechanics. And it is not a comfortable defence of technological optimism. Instead, this video looks at how The End of Eternity dismantles the idea that optimisation is morally neutral. It examines how systems designed to reduce harm can quietly erase agency, ambition, and responsibility. It looks at why well-meaning decisions can still be destructive, and why a future that never risks failure may not be a future worth preserving.

If you enjoyed The Gods Themselves, this video follows a similar line of enquiry. Not because the books are the same, but because they unsettle for the same reason. They both sound reasonable right up until the moment they stop being comforting.

This video avoids spoilers, but it does not avoid the central ideas. If you are interested in science fiction that treats ethics seriously, this discussion is for you. If you are uneasy about the growing reliance on models, metrics, and systems that claim to know better than people do, this book has something sharp to say. And if you think safety is always an unambiguous good, this video may challenge that assumption.

If you find the argument interesting or useful, please consider liking the video. It helps more than you might expect. Subscribing supports the channel and ensures you see future videos on science fiction, philosophy, and uncomfortable ideas hidden inside familiar stories.

Most importantly, I would love to hear what you think. Would you accept a future that is safer but chosen for you? Where do you draw the line between protection and control? And is there anything you would refuse to optimise away, no matter how persuasive the data? Leave a comment and join the discussion.

About the channel
This channel looks at science fiction as a way of thinking, not just a genre. The focus is on ideas, ethics, and the questions stories leave behind after the last page. New videos are published regularly.

#gibsononbooks #scifibookreview #booktube