
Can AI Bots Like Chat-GPT and Bard Answer Photonics Questions?

Posted on 2023-09-06 as part of the Photonics Spotlight (available as e-mail newsletter!)

Permanent link: https://www.rp-photonics.com/spotlight_2023_09_06.html

Author: Dr. Rüdiger Paschotta, RP Photonics AG

Abstract: The article reports on various tests in which AI chatbots such as Chat-GPT, Google Bard and Bing Chat were asked questions on photonics and related topics. While such tools work impressively well for transforming texts and for simple coding tasks, they often fail on factual questions, confidently presenting wrong explanations. The article discusses the reasons for this and what to expect in the future.


More and more people have started at least to play around with AI chatbots like OpenAI's Chat-GPT, Google Bard and Bing Chat. It is amazing what has been achieved with such large language models (LLMs); interacting with them indeed often feels like dealing with artificial intelligence. So what exactly can they do, and where are their limitations? Here, I report on my experience from quite a few tests – mostly with Chat-GPT 4, recently also with Google Bard and Bing Chat – and on my interpretations. Although I am certainly not an AI expert, I think I understand enough about the operating principles to come to reasonable conclusions, besides being able to judge the answers obtained from the AI.

Writing a Python Function

One of my early tests was asking Chat-GPT for a Python function with a user interface – a window with a variable number of checkboxes and various functional buttons. I explained the requested functionality in several sentences; it was not particularly complex, but certainly not trivial. Within a few seconds, it nicely presented some code. I tried that out, and it didn't work. But when I told Chat-GPT about the obtained error message, it apologized and returned corrected code, which then worked well. That is amazing – maybe even more impressive than if it had delivered correct code right away: it seemed to understand the problem and fix it.
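For illustration, here is a minimal sketch of the kind of function I had asked for – my own simplified version, not the code produced by Chat-GPT; the function name and all details are made up:

import tkinter as tk

def checkbox_dialog(title: str, options: list[str]) -> list[str]:
    """Show a window with one checkbox per option; return the selected options."""
    root = tk.Tk()
    root.title(title)
    variables = []
    for option in options:
        var = tk.BooleanVar(value=False)
        tk.Checkbutton(root, text=option, variable=var).pack(anchor="w")
        variables.append(var)
    selected: list[str] = []
    def on_ok():
        # Collect the options whose checkboxes are ticked, then close the window.
        selected.extend(o for o, v in zip(options, variables) if v.get())
        root.destroy()
    tk.Button(root, text="OK", command=on_ok).pack(side="left", padx=10, pady=5)
    tk.Button(root, text="Cancel", command=root.destroy).pack(side="right", padx=10, pady=5)
    root.mainloop()
    return selected

print(checkbox_dialog("Select items", ["Laser", "Fiber", "Amplifier"]))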

Obviously, it can be quite time-saving to have such an assistant when programming – although you surely need to check carefully any code you get. I feel that the productivity of a good programmer can be increased quite substantially with such tools (and probably even more in a few years), although you definitely still need a competent person. An untrained coder would find it hard to work even with AI assistance: it is not trivial to ask the right questions in the first place, then to fix various remaining problems, and to ensure that the resulting program works reliably.

Reformulating Physics Texts

Language models are specifically made for operating on texts, so I tried a few variants of such operations. One goal was to improve the language of some explanations in photonics. In other cases, I wanted a summary with a given approximate length, or a translation to German.

Generally, that worked well. The style of the text can to some extent be controlled with detailed instructions in the prompt – for example, saying that it should be the style of a technical article for a technically educated audience. However, in a few cases – particularly concerning explanations of difficult subjects – the produced text was logically less precise and clear than the original. So one really needs to check the resulting text carefully, e.g. before publishing it, as it may be inaccurate or even misleading.

Physics Questions

I did further tests with questions in the context of physics – certainly less common topics than Python programming! Not surprisingly, the results were overall not that great.

Thermal Electronic Noise

When I asked Chat-GPT to explain the derivation of the Nyquist noise power formula for thermal electronic noise, I obtained answers which might have looked reasonable to some people not knowing much about the topic. However, there were substantial deficiencies:

  • It claimed a connection with Nyquist's sampling theorem – which in fact has little to do with the topic, although it is the same person (Harry Nyquist).
  • It presented an equation for the thermal noise power without explaining in what situation that noise power would occur – but that is vital. For example, if you don't connect anything to a resistor, or just short-circuit it, no noise power is generated. The common equation applies to an impedance-matched load attached to the resistor.
  • The presented “derivation” just asserted a first equation without any physics arguments. Based on that, it arrived at a noise power four times higher than the one presented before, which it now called a more common “simplified” version. When I asked about that factor of 4, it apologized and then claimed that the factor really must be there. (See the short derivation after this list for where that factor actually belongs.)
  • I also asked about the interpretation of that noise power. The answers were wordy, but largely wrong. For example: “The Nyquist noise power is an inherent property of resistors and conductors, and it is not 'delivered' to other components in a conventional sense.” However, if you connect two equal resistors to each other, each one sends a certain noise power to the other one, and that power is given by the original equation without the factor of 4.
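For reference, here is the standard reasoning in compact form (textbook physics, not the chatbot's output), showing where the factor of 4 belongs. The open-circuit thermal noise voltage of a resistor \(R\) at temperature \(T\) has the one-sided power spectral density

\[ S_V(f) = 4 k_\mathrm{B} T R \]

If an impedance-matched load \(R_\mathrm{L} = R\) is attached, only half the noise voltage drops over the load, so the power delivered to it within a bandwidth \(\Delta f\) is

\[ P = \frac{\langle (V/2)^2 \rangle}{R} = \frac{4 k_\mathrm{B} T R \, \Delta f}{4 R} = k_\mathrm{B} T \, \Delta f \]

So the factor of 4 belongs to the voltage spectral density, not to the power delivered to a matched load.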

I was not surprised about that failure, knowing that the literature contains numerous texts on thermal noise which are inaccurate or even quite wrong. It appears to be one of those topics where too many people copy material from others who also do not understand the matter. And that is the kind of material chatbots are trained on. The basic principle is: garbage in – garbage out!

Questions on Laser Technology

When you ask simple questions on laser technology, chatbots often perform reasonably well. For example, they may give a decent explanation of how the guiding of light in an optical fiber works.

A still fairly simple question to Google Bard was: explain the working principles of fiber amplifiers. I was puzzled to read strange things like this: “When the erbium ions are pumped with light from a laser, they are excited to a higher energy level. When they return to their ground state, they emit photons of light at the same wavelength as the pump light. These photons can then be amplified by stimulated emission, which is a process in which a photon of light stimulates the emission of another photon of light with the same wavelength.”

When I asked how that would lead to the amplification of a signal, it fantasized: “… When these excited erbium ions then encounter a photon of light at the same wavelength as the pump light, they are stimulated to emit a photon of light with the same wavelength. This process can happen many times, resulting in a chain reaction of photon emissions that amplifies the signal light.”

I also tried this question with Bing Chat. It works in a substantially different way, based on retrieval: it first searches the Internet and then formulates answers based on the retrieved materials, also quoting its sources. The second paragraph of its response (after some general intro):

“The working principle of an EDFA is based on the absorption and emission of light by erbium ions that are doped into the core of an optical fiber. When a signal-carrying light beam enters the doped fiber, it excites the erbium ions to a higher energy level. The excited ions then release their energy in the form of additional photons that are in phase with the original signal, thereby amplifying it (1).”

It cited my encyclopedia article on fiber amplifiers – which, however, certainly does not endorse that explanation: of course, it is the role of the pump light, rather than the signal light, to excite the erbium ions. Bing's version sounds as if signal photons could do that job all alone, producing additional signal light out of nothing – effectively a perpetuum mobile. Further, I certainly wouldn't have claimed that additional photons are “in phase with the original signal”: what, after all, is the phase of a photon?
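For clarity, here is the actual mechanism in compact form – standard amplifier physics, not taken from any of the bots. The pump maintains a population inversion, and the signal power \(P_\mathrm{s}\) then grows along the fiber by stimulated emission:

\[ \frac{dP_\mathrm{s}}{dz} = \Gamma_\mathrm{s} \left[ \sigma_\mathrm{e} N_2(z) - \sigma_\mathrm{a} N_1(z) \right] P_\mathrm{s}(z) \]

Here, \(N_2\) is the density of excited erbium ions (maintained by pump absorption), \(N_1\) the ground-state density, \(\sigma_\mathrm{e}\) and \(\sigma_\mathrm{a}\) the emission and absorption cross-sections at the signal wavelength, and \(\Gamma_\mathrm{s}\) the overlap of the signal mode with the doped core. Without pump light, \(N_2 \approx 0\), and the signal is absorbed rather than amplified – certainly no perpetuum mobile.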

I then asked back: “Does Ref. 1 really claim that signal photons excite the erbium ions and in that way generate more signal photons? Further, does it talk about additional photons being in phase with the original signal?” It did not answer the question, but just confirmed its claim – now, however, referencing a different web page (from FiberLabs Inc.), which also correctly explains the mechanism and does not support Bing's strange claim. I then asked again: “If signal photons themselves excite the erbium ions, could we actually get amplification without involving pump light?” Bing's answer: “The excitation of erbium ions by signal photons is a spontaneous process that occurs naturally in an erbium-doped fiber. However, the number of excited ions is typically very small, and the resulting amplification is negligible (1). Therefore, pump light is required to excite a large number of erbium ions and create a population inversion between the ground state and the excited state, which is necessary for significant amplification to occur (1,2).” The referenced pages were the former two (FiberLabs and my article).

This demonstrates that while such an “intelligent” search can be useful for finding good sources – it may even tell you more than usual search engine results about what to expect from specific references – one really needs to check those sources, as what you find there may well differ from what the bot claimed.
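To make the retrieval principle more concrete, here is a toy sketch in Python – purely illustrative, with made-up document data and naive keyword scoring; it certainly does not reflect how Bing Chat works internally. The point is that the retrieved sources only enter the prompt as text, and the model may still misstate what they say:

import re

def words(text: str) -> set[str]:
    """Lower-case word set, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, documents: list[dict], top_k: int = 2) -> list[dict]:
    """Rank documents by naive keyword overlap with the query."""
    def overlap(doc: dict) -> int:
        return len(words(query) & words(doc["text"]))
    return sorted(documents, key=overlap, reverse=True)[:top_k]

def build_prompt(query: str, sources: list[dict]) -> str:
    """Compose a prompt asking the language model to answer only from the sources."""
    numbered = "\n".join(f"({i}) {d['url']}: {d['text']}"
                         for i, d in enumerate(sources, start=1))
    return ("Answer the question using only the sources below, citing them by number.\n\n"
            f"Sources:\n{numbered}\n\nQuestion: {query}")

docs = [
    {"url": "https://www.rp-photonics.com/fiber_amplifiers.html",
     "text": "In a fiber amplifier, pump light excites erbium ions, and the signal "
             "is amplified via stimulated emission."},
    {"url": "https://example.com/lenses",
     "text": "An unrelated page about imaging with lenses."},
]
query = "How does an erbium-doped fiber amplifier work?"
print(build_prompt(query, retrieve(query, docs)))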

Remarkably, the language models normally do not surrender – i.e., admit their inability to answer a question – even in cases where they fail completely. Well, in some other, even simple cases they do surrender; but the bigger problem is, of course, getting confident replies which lead you completely astray.

As a more difficult test, I asked Chat-GPT to explain the concept of intracavity pumping of lasers. After some general laser intro, it said:

“In the context of these concepts, intracavity pumping involves placing the pump source (which could be another laser or a flashlamp, for example) inside the optical cavity. This method allows the active medium to be pumped continuously as the light circulates within the cavity, which can potentially increase the efficiency of the energy transfer and result in higher output powers.”

Later, it also said that a broad-area diode laser could be placed within the cavity of a fiber laser. When I asked how to do that, it elaborated more strange ideas about this “fascinating concept”.

I also tried that with Google Bard. It started with a reasonable statement, emphasizing that the pump light can make multiple passes through the gain medium – before being emitted as laser light. That last part is strange. It then correctly said that this is an interesting technique for low-gain lasers, and gave fiber lasers as an example – although precisely those usually exhibit high gain and need little pump intensity. On follow-up questions like “How to place a laser diode inside a fiber laser cavity?” (it had suggested using a laser diode for intracavity pumping of a fiber laser), Bard only produced more nonsensical answers.

Bing Chat was far better in this case, generating a fully correct answer with a good example case (Ho:YAP gain medium within the resonator of a Tm:YAP laser).

So such searches work fine in some cases, but it was indeed fascinating to see how complete nonsense was often elaborated in detail, rather than the bot simply admitting that it cannot answer. Well, the principle of a large language model is not to start from proven principles, but to start with a little text and then repeatedly work out which continuation of that text is most likely. What is astounding is actually not that this fails in various cases, but rather that it works quite well in many others.
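As a toy illustration of that principle, consider a tiny bigram model with made-up transition weights – real LLMs use neural networks operating on subword tokens, but the generation loop has the same structure: repeatedly pick a likely continuation of the text so far:

import random

# Made-up transition table: for each word, possible next words with weights.
bigrams = {
    "the":   [("laser", 3), ("fiber", 2), ("pump", 1)],
    "laser": [("emits", 2), ("cavity", 1)],
    "fiber": [("amplifier", 2), ("laser", 1)],
    "pump":  [("light", 3)],
}

def continue_text(start: str, max_words: int = 5) -> str:
    """Generate text by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(max_words):
        options = bigrams.get(words[-1])
        if not options:
            break  # no known continuation
        candidates, weights = zip(*options)
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(continue_text("the"))  # e.g. "the fiber laser emits"

Such a model has no notion of whether the produced statement is true – it only knows which words tend to follow which.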

Energy

I also did various tests in the area of energy supply, and recently published a German article on that topic on my German energy website. The results tended to be worse than those for the more scientific questions – presumably because more incompetent people write on energy matters.

A funny observation: when I asked Google Bard to improve a text in which Bard's performance was criticized, it changed the statements to blame Chat-GPT! Quite clever – or just a consequence of the fact that Chat-GPT was far more likely to be criticized in Bard's training materials.

Conclusions

The crucial point is that large language models have been developed for processing language, rather than for researching facts. Full stop.

These models therefore work well for processing texts – ideally, reliable input texts which you provide yourself, and which only need to be transformed in some way: summarized, reformulated for a different style, translated to another language, etc. In fact, the quality of such operations in 2023 is amazingly good.

However, many people are more interested in finding facts. Here, chatbots still work reasonably well in some simpler cases, where the questions together with correct answers occur sufficiently frequently in the training texts used (largely taken from the Internet). But in areas where a lot of inaccurate material is spread, that is usually what the bots return. And if users don't already know enough themselves, the result is that they are misled.

The eloquence of large language models, together with their huge volume of training data, allows them to produce text which is often good enough to simulate a real understanding of the subject. However, real understanding would be far more than what is possible for chatbots: for example, it would be fully based on detailed trusted knowledge, combined with reliable logic. An artificial intelligence having all that would also be able to reliably recognize the limits of its competence, and could indicate those – instead of spitting out rubbish which may have been found somewhere or even invented.

The Future

Being well aware of the difficulties of predicting the future, I want to share some thoughts.

Further Refinements of AI

The progress of artificial intelligence in the form of large language models has been enormous, and recently became widely known through more or less open-access chatbots (and some other tools such as AI-based image generators). It is tempting to extrapolate that progress over several coming years, expecting that currently encountered obstacles will then be largely overcome. However, I am quite skeptical because of some fundamental challenges:

  • The problem of garbage in – garbage out is one of those. LLMs require a huge amount of training text, and providing that while guaranteeing high quality appears to be impossible. Only if LLMs could be made to work with much less input material, and if that (still large amount of) material were curated by a large team of high-level experts, could garbage be largely eliminated. Even if that is done one day, it will presumably be too expensive for offering essentially free services.
  • My colleague Gareth Moore raised the objection that while learning a language requires a huge amount of text, fine-tuning a model to get the facts right can be done with much less material. The question is just whether that approach makes it practical to provide enough high-quality fine-tuning input to achieve reliable answers. I guess that may work for a limited area of expertise, but it is presumably very hard to cover a large field such as laser technology or photonics as a whole.
  • By the way, I think that fairness would definitely require paying not only curators working directly for chatbot operators, but also those providing high-quality content for training. Using hard-working authors' high-quality materials without asking, without any compensation, and for producing outputs which do not even reference those materials is not something I would call good behavior.
  • As explained above, real understanding fundamentally requires far more than statistical information on word sequences. It cannot be replaced with simply still more statistical data. We need fundamentally new principles. Maybe those will some day be invented and worked out, but that would be a new thing, not just a continuation of LLM research.

Therefore, I expect that the already excellent language processing capabilities will be improved further and successfully applied to more and more cases, while true intelligence will take substantially more effort and time to realize. I dare to predict, for example, that we will all need trusted content from human authors (for example, texts like my RP Photonics Encyclopedia) for many years to come, and that we can use LLMs for assistance, but not for fundamental fact finding, let alone for creating new knowledge. In other words, the term artificial intelligence will deserve double quotes for the foreseeable future.

I believe, however, that real artificial intelligence is possible and will eventually be realized, although this is extremely hard. After all, there are already quite intelligent “machines” – our brains! – and I don't see a fundamental reason why semiconductor technology should not be suitable for similar capabilities, or in fact even much stronger ones. I also find Nick Bostrom's warnings concerning a possible “intelligence explosion”, rapidly leading to a superintelligence, quite plausible. But he himself doesn't dare to predict when that will happen, and what we have experienced so far seems to be far from such an explosion, despite some remarkable achievements.

Meanwhile, let us use AI for what it is good at – for example, transforming texts in various ways.

By the way, AI is of course not all about language processing, and there are in fact important applications of various kinds of machine learning in science and technology. With this article, however, I wanted to focus on AI bots giving answers on photonics questions.

(Credits to my colleague Gareth Moore, who provided various useful comments!)


This article is a posting of the Photonics Spotlight, authored by Dr. Rüdiger Paschotta. You may link to this page and cite it, because its location is permanent. See also the RP Photonics Encyclopedia.

Note that you can also receive the articles in the form of a newsletter or with an RSS feed.

