Some thoughts on AI
1. Timelines
- Q: Do we expect AI to cause human society to collapse so soon that we need to change our policy recommendations (or just sit on a large pile of food inside a remote bunker while we wait for the nukes to fly)?
- A: This doesn’t seem to be supported by the evidence.
AI will probably continue to develop and become more capable. However, when and how AGI will emerge and/or be deployed is deeply uncertain, because numerous technological and societal factors complicate any forecast. Expert surveys with short timelines, or individuals assigning high probabilities to very specific sequences of events, are probably not the best sources of information.
Illustrative quotes:
- “Working from the definition of AGI provided, the median forecasts for the questions on whether AGI will exist by 2043, 2070, and 2100 are 12%, 40%, and 60%, respectively. Some of the key drivers behind the forecasts are: (1) logistics and the availability of training data as an input for AI, (2) current pace of development, (3) incentives, (4) the extent to which the time horizons are sufficient for necessary technical breakthroughs, and (5) the complex effect of societal disruptions, including possible great power wars in this period.” (Superforecasting AI)
- “Assuming AGI exists by 2070, participating Superforecasters see a 6% probability that humanity will either go extinct or have its future potential drastically curtailed due to loss of control of AGI by 2200. Some of the key risks and drivers identified are: (1) the ability to ensure alignment, (2) the speed of deployment, and (3) the possibility and effectiveness of regulation and oversight. They also point out that (4) extinction may be a high bar, given human resilience, the spread of populations across the globe, and human ability to live in low-tech environments.” (ibid)
- “There has been fantastic progress in LLMs’ capabilities, but AI still has a long way to go. We’ll likely need a different model to achieve AGI, but the current enthusiasm for LLMs will ensure lots of money and talent will be poured into AI this decade, which may get us there.” (ibid)
- “Looking back 47 years, nobody could have seriously predicted what would be possible today—and how that would be possible. Computers with the computing power of modern refrigerators filled huge rooms back then. I don’t think this will happen on the exact path that ChatGPT or GPT-3/GPT-4, Minerva, etc., are on. But 47 years is enough time for something completely new to revolutionize the field.” (ibid)
- “What’s holding me back is more general uncertainty around the overall stability of society (societal acceptance, environmental damage, etc.) to continue the R&D on AGIs.” (ibid)
- “Some Superforecasters point out that the current studies suggesting a high probability of an imminent emergence of AGI have certain limitations” (ibid)
- “2100 is so far out that the Maes–Garreau law does not apply. However, the inherent uncertainties will remain. I have no idea what path would lead to AGI and thus how to replicate it. It did take evolution a while to get to ‘intelligence,’ so it’s not an easy problem to solve. Without even a glimmer of a path, it is hard to estimate how long it’ll take to traverse it.” (ibid)
2. Becoming obsolete as AI develops
- Q: Will AI become so good that I’m driven out of a job?
- A: Maybe, but probably not.
AI might change the nature of my work, but that is true for everybody. There's probably not much use in worrying about being made obsolete, since AI could just as well create new jobs. A wiser attitude is to keep an eye on developments and on how new advances can be used to improve my own work.
Illustrative quotes:
- “Our analysis suggests that, with access to an LLM, about 15% of all worker tasks in the US could be completed significantly faster at the same level of quality. When incorporating software and tooling built on top of LLMs, this share increases to between 47 and 56% of all tasks. […] We conclude that LLMs such as GPTs exhibit traits of general-purpose technologies, indicating that they could have considerable economic, social, and policy implications.” (Eloundou et al. 2023)
- “‘Whilst a consensus is forming on the impact robots could have on people’s livelihoods there is also the frequently heard counter-argument that new jobs will be created and new products will be produced. Robots will also allow people to focus on aspects of jobs that they are better at, that they may prefer and would allow an extension to their working lives (Hinks, 2020, n.p).’ […] What is omitted from discussions of job automation and its effects is the fact that, despite the fact that artificial intelligence has reduced the number of jobs available as a result of automating productive processes, it has also created a lot more new jobs. This is in line with a Deloitte report titled “Technology and People: The Great Job-Creating Machine,” which found that automation has been “the great job-creating machine.” It claimed that the past 200 years show that, strangely, quicker growth and, eventually, rising employment, rather than job losses, occur when a computer replaces a human.” (Ekwueme, Areji, and Ugwu 2023)
3. Using current AI tools in my work
- Q: Should I be using existing tools in my current work? If so, how?
- A: Probably. Specific applications that interest me are: brainstorming; providing counterarguments; editing text; generating catchy titles and headlines; summarising text; writing code; extracting data from text; reformatting data.
AI seems really good at saving time by automating micro-tasks that don't require much oversight or checking. I think a wise attitude is to take advantage of LLMs where they offer clear advantages (generating content and processing large amounts of text) while staying mindful of the risks (poor ability to evaluate and discriminate content; privacy concerns).
Therefore, I will explore tools that let me automate micro-tasks that are time-consuming in my day-to-day work or that otherwise enrich my perspective on issues (with a preference for tools with strong privacy policies). Specifically, Phind should suffice for writing code. Gemini should suffice for the rest: brainstorming; generating catchy titles and headlines; editing text; providing counterarguments (particularly mimicking moral exemplars or trustworthy people I know personally); summarising text; extracting data from text; reformatting data. A sketch of how one such micro-task could be scripted follows.
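To make this concrete, here is a minimal sketch of one such micro-task - extracting structured data from free text - using Google's google-generativeai Python library. The model name, prompt, and extract_fields helper are illustrative assumptions on my part, not a vetted recipe; any comparable LLM API would follow the same pattern, and the output still needs human checking before use.

```python
# Minimal sketch (assumptions: the google-generativeai package is installed
# and GEMINI_API_KEY is set; the model name and prompt are illustrative).
import json
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")  # illustrative model choice


def extract_fields(citation: str) -> dict:
    """Pull author, year, and title out of a free-text citation."""
    # Per the privacy caveats above: never put sensitive information in prompts.
    prompt = (
        "Extract the author, year, and title from this citation. "
        "Reply with JSON only, using the keys 'author', 'year', 'title'.\n\n"
        + citation
    )
    response = model.generate_content(prompt)
    raw = response.text.strip()
    if raw.startswith("`"):
        # Models sometimes wrap JSON in a Markdown code fence; strip it.
        raw = raw.strip("`").removeprefix("json").strip()
    # LLM output can be wrong or malformed, so parsing may raise - by design:
    # better to fail loudly than to silently accept bad data.
    return json.loads(raw)


print(extract_fields(
    "Korinek, Anton. 2023. Language Models and Cognitive Automation "
    "for Economic Research. NBER Working Paper 30957."
))
```

The same prompt-in, text-out, parse-defensively pattern covers summarising, reformatting data, and generating titles; only the prompt changes, and the human-review step stays.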
Quotes from key sources:
- “Great leaders are also those who are able to pause and make sure they understand the truth of the hype, internalizing the uncertainties of new technologies to balance the opportunities and unmanaged risks.” (Bughin 2023)
- “LLMs increasingly have comparative advantage in generating content; humans currently have comparative advantage in evaluating and discriminating content. LLMs also have super-human capabilities in processing large amounts of text.” (Korinek 2023)
- “It is easy - and dangerous - to overestimate the capabilities of LLMs. […] It is easy - and dangerous - to underestimate the capabilities of LLMs.” (Korinek 2023)
- “AI can generate a text with mistakes, including incorrect math, reasoning, logic, factual information, and citations (even producing references to scientific papers that do not exist).” (Chemaya and Martin 2023)
See also the Faunalytics policy on AI use, which strikes me as very thoughtful:
- Faunalytics permits staff to use LLMs for low-stakes tasks.
- “However, all staff are prohibited from sharing/input[ting] any sensitive information as part of their prompts, since such information becomes integrated into training data and could be exposed to others.”
- “LLMs and chatbots will not be used to directly generate text for our Research Library summaries or Original Research projects, whether internal or public-facing. It is our view that the current capabilities of LLMs preclude them from being particularly useful for us in such areas, and indeed, their potential for inaccuracy could create more problems and require even closer editing than text written by us directly.”
- “Based on the reputational risks posed by AI image generation, Faunalytics will avoid the use of image-generation tools for photorealistic images in our public-facing work. We likewise caution individual advocates and groups about using image generation tools, as they have the potential to seriously erode public trust in our individual organizations, and our movement more broadly.”
Afterword
I think a really useful thing to keep in mind is this photo of Beverton and Holt, who established the quantitative science of fisheries management, working on a fisheries model in 1949:
[Photo: Ray Beverton and Sidney Holt working on a fisheries model, 1949]
References:
- Bughin, Jacques. 2023. “To ChatGPT or Not to ChatGPT?” SSRN Working Paper 4411051. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4411051.
- Chemaya, Nir, and Daniel Martin. 2023. “Perceptions and Detection of AI Use in Manuscript Preparation for Academic Journals.” arXiv [cs.CY]. arXiv. http://arxiv.org/abs/2311.14720.
- Ekwueme, Francis Okechukwu, Anthony C. Areji, and Anayochukwu Ugwu. 2023. “Beyond the Fear of Artificial Intelligence and Loss of Job: A Case for Productivity and Efficiency.” Qeios. https://doi.org/10.32388/3bwnxg.
- Eloundou, Tyna, Sam Manning, Pamela Mishkin, and Daniel Rock. 2023. “GPTs Are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models.” arXiv [econ.GN]. arXiv. http://arxiv.org/abs/2303.10130.
- Korinek, Anton. 2023. “Language Models and Cognitive Automation for Economic Research.” Working Paper Series. National Bureau of Economic Research. https://doi.org/10.3386/w30957.