I have used it twice (with full disclosure to the client) on a couple of items my clients wanted to know about but did not want to pay for a white paper on. I used one of the programs, read the output, verified any facts I was not familiar with (there may have been one), and sent it on to the client. They were happy, and I was happy, because I don’t like writing white papers…
Rob
I think in many ways it will be the next iteration of industrial automation. For example, rather than programming manufacturing robots or control systems to do repetitive tasks, AI-driven systems can take it much further and perform tasks that require contextual decision-making. This could manifest in a variety of ways in cities (think traffic light controls, maybe). But as has been discussed in many places, this kind of AI for vehicles is not going to be a panacea for all the ills of auto-based transportation. Yes, it will solve some problems, but it will likely create just as many new ones: the "unknown unknowns".
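To make that contrast concrete, here's a minimal sketch (not a real traffic system; the queue counts, thresholds, and function names are all invented for illustration): a fixed-time controller repeats the same schedule no matter what, while a context-aware one picks its phase and duration from observed demand.

```python
# Hypothetical sketch: fixed-time vs. context-aware signal control.
# All numbers and names are invented for illustration.

FIXED_CYCLE = [("NS_green", 30), ("EW_green", 30)]  # repeats forever, context-blind

def fixed_time_controller():
    """Classic automation: the same schedule, regardless of conditions."""
    while True:
        for phase, seconds in FIXED_CYCLE:
            yield phase, seconds

def adaptive_controller(ns_queue: int, ew_queue: int) -> tuple[str, int]:
    """'Contextual decision making': pick phase and duration from observed demand."""
    if ns_queue == ew_queue == 0:
        return "NS_green", 10          # idle default
    phase = "NS_green" if ns_queue >= ew_queue else "EW_green"
    # Longer green for heavier queues, capped so the other approach isn't starved.
    seconds = min(60, 10 + 2 * max(ns_queue, ew_queue))
    return phase, seconds

print(adaptive_controller(ns_queue=12, ew_queue=3))  # ('NS_green', 34)
```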
To be honest, AI right now strikes me as being in that miracle-it's-gonna-solve-everything part of its lifecycle. It's going to write novels, it's going to revolutionize fine art, it's going to answer all of life's mysteries.
At a time when I can't count on the supermarket self-checkout to reliably handle a marked-down item or a wrinkled bar code, and when I still receive credit offers in the mail from companies working from mailing lists that are a decade old, that seems like a lot to ask of a nascent technology.
Pull back on the hype, come up with some reasonable applications first (like making sure my car navigation doesn't refer to our local County Roads C, D, and E as County Road C, D, and East), and I'll call it a technology win. Let's walk before we run.
It's both a great use of technology and also dangerous. By definition, it pulls from whatever information is available to it. It's the classic data in = data out: the output depends on what data has been put in. This, by default, will tend to regurgitate older or more status-quo viewpoints. Any thought leadership that is looking to identify problems or suggest new or different ways of doing things will be discounted, because there is less information about it. I have seen this in a test by a parenting leader I follow: ChatGPT reflected the bias inherent in the data it was referencing, so without careful review of the output, that bias gets circulated further. Here's the blog post: https://www.screenagersmovie.com/blog/new-cheating-breakthrough-can-kids-resist
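A toy illustration of that data in = data out dynamic, with a completely made-up corpus (nothing here comes from the blog post; the answers and their frequencies are invented): a generator that samples outputs in proportion to how often they appear in its training data will mostly repeat the majority view.

```python
# Toy illustration of "data in = data out": outputs are sampled in
# proportion to their frequency in an invented training corpus.
import random
from collections import Counter

corpus = (
    ["widen the road"] * 80              # the status quo answer dominates the data
    + ["add protected bike lanes"] * 15
    + ["remove the road entirely"] * 5   # new thinking is underrepresented
)

counts = Counter(corpus)

def answer(n_samples: int = 1000) -> Counter:
    """Sample outputs weighted by how often each appears in the corpus."""
    options, weights = zip(*counts.items())
    return Counter(random.choices(options, weights=weights, k=n_samples))

print(answer())  # ~800 of 'widen the road': the majority viewpoint is regurgitated
```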
People who aren't familiar with AI (text, images, video, or any combo) have no idea what questions to even ask. I hope I'm able to be at your CNU session. I think one of the fundamental points is to describe what a large language model is. It'll help normies understand this isn't a scary mystical force, but an incomprehensibly fast synthesizer of information that reacts without emotion to the prompts it's given.
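For anyone building that explanation, a deliberately tiny sketch of the core loop might help. Real LLMs use huge neural networks over tokens; this toy bigram model (training text invented for the example) shows the same basic move of repeatedly predicting a likely next word from the text so far, with no intent or emotion involved.

```python
# A toy next-word predictor, to demystify the core loop of a language model:
# given the text so far, repeatedly pick a statistically likely next word.
import random
from collections import defaultdict

training_text = "the city needs housing and the city needs transit and the region needs both"

# Build a table: word -> list of words that followed it in the training text.
follows = defaultdict(list)
words = training_text.split()
for a, b in zip(words, words[1:]):
    follows[a].append(b)

def generate(prompt: str, length: int = 8) -> str:
    out = prompt.split()
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))  # no intent, just statistics
    return " ".join(out)

print(generate("the"))  # e.g. 'the city needs transit and the region needs both'
```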
Building on what Karen and Andy already wrote -- we are all going to have to learn HOW to ask the RIGHT questions. (Much like many of us learned when working with traffic and transportation engineers, the wording of your question may produce a different response, e.g. "this change will create a 20% trip delay" vs. "a 30 second delay...") Will we be forced to learn the AI language? Or will AI quickly learn our language?
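A quick worked check of that framing point (the 150-second baseline trip is my assumption, chosen so the two numbers in the example coincide): the same delay can be honestly reported either as seconds or as a percentage, and the wording does the persuading.

```python
# The same delay, framed two ways. The 150-second baseline trip is an
# assumed figure, not from the original example.
baseline_trip_s = 150           # assumed average trip through the corridor
delay_s = 30

percent_framing = 100 * delay_s / baseline_trip_s
print(f"'a {percent_framing:.0f}% trip delay'")   # 'a 20% trip delay'
print(f"'a {delay_s} second delay'")              # 'a 30 second delay'
```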
Within the first decade of adoption, I don't think as many jobs are going to disappear as most tech companies and leaders seem to assume. It's also hard to predict what new jobs will be created through AI applications. It's almost a certainty that food service, groceries, retail, etc. will get by with fewer employees, but this shift may come with benefits as well. It could allow for increased pay, expanded roles, and putting employee resources toward new areas where before they were limited by existing required tasks and tighter budgets. Still so many unknowns.
As for white-collar work, as impressive as GPT-4 is, it still does not appear to be very adept at nuance, in-depth understanding, incorporating context, or understanding how the human experience relates to the task at hand. These are all incredibly important skills in government, management, sales, financial services, information services, analysis and research, etc. They are perhaps less important in the hard skills of fields like coding, accounting, physics, mathematics, and engineering, but still very important when it comes to the soft skills of those jobs, as well as problem solving: specifically where elements of design, creativity, and non-linear thinking come into play. Knowing what questions are the right questions to ask. Knowing how the architecture should fit together, and how it should be managed and function over the long term. AI will be instrumental in streamlining and supporting the computational elements of these tasks.
As an aside, a yet-to-be-answered question and potential problem is the fact that this computational work is how students and junior employees truly learn the ins and outs of a hard skill like coding or engineering, and there may be some element lost in lessening that deeper understanding with AI. Or it may be more like computers and calculators, and only mean that the knowledge base shifts and is able to grow faster than it did before the computational help. It's hard to say at this point.
I think AI will lead to a revolution in productivity by taking on the more rote, routine, and time-consuming tasks, with human oversight still very much engaged in aligning and designing the final output. This will likely be true of almost all white-collar jobs, which still require those human elements of creativity, empathy, nuanced understanding, etc. Much like computers and the internet before it, the bulk of the change from AI will likely be productivity gains.
In our field of urban planning and city design, I think the tech promise is probably bigger than the real outcome will be (smart cities: the sequel), but it's still going to change the field in a fundamental way. In the same way that the data of a smart-cities approach helped us confirm strategies (like what to focus on in order to reduce deadly car crashes, and where to implement interventions to see the biggest impact), AI will bring insights that help us do a better job of connecting measurable goals with real-world outcomes. Additionally, AI will streamline and at least partially automate tasks like reports, spreadsheets, mapping, financial analysis, project prioritization, geographic analysis, permits, development approvals, pro formas, presentations, etc. However, where the creativity happens within these tasks, and how they are individualized for a particular community, project, culture, or goal, will still require human minds, for at least the rest of our lifetimes, and I would venture to guess long after that.
While AI represents a productivity game changer, it's also a potential threat to good design and planning, in the same way modernism and auto-centric design were. We risk forgetting elements of design learned over centuries, throwing them out for the sake of the shiny brand-new thing. In the 20th century we made the mistake of seeing a technological change and assuming a human change. Instead we learned hard lessons about the fairly static rules surrounding what works for communities and people in terms of city design. We face massive risks to society if that collective, shared, and organic wisdom about what builds strong, resilient communities, functional housing markets, and active, engaged citizens is not heeded. With many of these elements, we're still learning, building, and progressing on what seems to work, and AI could be used to help that progress along. Or it could pose a risk of unlearning it. As is the case with nearly all technology, AI has the potential to greatly improve or greatly damage society, depending on how it's used. And given how powerful it is, the stakes of getting it right versus getting it wrong are potentially the greatest humanity has ever faced. Good luck to us. No pressure.