SEOUL (Reuters) – OpenAI Chief Executive Sam Altman is set to meet with South Korean President Yoon Suk Yeol and about 100 local startups on Friday, as the country seeks to encourage domestic competitiveness in artificial intelligence.
After crisscrossing Europe last month to meet lawmakers and national leaders and discuss the prospects and threats of AI, Altman has travelled this week to Israel, Jordan, Qatar, the United Arab Emirates, India and South Korea.
South Korea is one of the few countries to have developed its own foundation models for artificial intelligence, a field dominated by the United States and China, thanks to local tech firms such as Naver, Kakao, and LG.
The firms are seeking ways to tap niche or specialised markets that have not yet been addressed by big tech companies in the U.S. or China.
Naver said it has been keen to develop localised AI applications for countries with political sensitivities in the Middle East as well as for non-English speaking countries, such as Spain and Mexico, the Financial Times reported in May, citing a Naver executive.
The rapid development and popularity of generative AI since Microsoft Corp-backed OpenAI launched ChatGPT last year are spurring lawmakers globally to formulate laws addressing safety concerns linked to the technology.
The European Union is moving ahead with its draft AI Act, which is expected to become law later this year, while the United States is leaning toward adapting existing laws for AI rather than creating whole new legislation.
South Korea has new AI regulation, seen as less restrictive than the EU's version, awaiting full parliamentary approval.
In February, a parliamentary committee passed a draft AI law that guarantees the freedom to release AI products and services, restricting them only if regulators deem a product harmful to people's lives, safety, or rights.
South Korea’s Ministry of Science and ICT announced in April plans focused on fostering local AI development, including measures to provide datasets for training hyperscale AI, while continuing discussions on AI ethics and regulation.
(Reporting by Joyce Lee and Heekyong Yang, editing by Deepa Babington)