Compiled from Semiconductor Engineering by Semiconductor Industry Watch (ID: icbank).
Pawel Malinowski, project manager at imec, sat down with Semiconductor Engineering (SE) to discuss how image-sensor technology is changing and why. What follows is an excerpt of that conversation.
SE: What's next for sensor technology?
Malinowski: We're trying to find a new way to make image sensors, because we want to get rid of the limitations of silicon photodiodes. Silicon is a perfect material if you want to reproduce human vision, because it is sensitive to visible wavelengths, which means you can do what the human eye does. That field is at a very mature stage. Approximately 6 billion image sensors are sold every year. These chips end up in the cameras of smartphones, cars, and other applications. They are typical standard image sensors, with silicon-based circuitry or electronics and silicon photodiodes. They basically do red, green, and blue (RGB) reproduction so that we can get a nice picture. But if you look at other wavelengths – for example, ultraviolet or infrared – you will find phenomena or information that are not available in visible light. We pay special attention to the infrared range. There, we target a specific band, between one micron and two microns, which we call shortwave infrared. In this range, you can see through things. For example, you can see through fog, smoke, or clouds, which is especially interesting for automotive applications.
SE: Are there any upcoming challenges or new applications for this technology?
Malinowski: At these wavelengths you can't use silicon, because it becomes transparent. That transparency is itself useful – for example, you can look through a wafer at cracks in silicon solar cells, which is interesting for defect inspection. Some materials also show different contrasts. Materials that look exactly the same in the visible range may have different reflectances in shortwave infrared, which means you can get better contrast, for example, when you're sorting plastics or sorting food. There are other applications, as shown in Figure 1 (bottom). It shows the power of sunlight passing through the atmosphere. The gray curve is above the atmosphere, and the black one is what reaches the earth. You'll see there are maxima and minima. The minima correspond to water absorption in the atmosphere. You can exploit these minima with an active illumination system, which means you emit some light and check the light that is reflected back. That's how Face ID works on your iPhone: you emit light and check what comes back. It operates at a wavelength of about 940 nanometers. If you use a longer wavelength (e.g., 1,400 nanometers), your background will be much lower, which means you can get better contrast. If instead you choose a wavelength where there is still a lot of light, you can use it passively to get additional information, such as in low-light imaging, where there are still some photons.
Figure 1: Possibilities for shortwave infrared. Source: imec
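The trade-off Malinowski describes – emitting your own light inside an atmospheric water-absorption band so that sunlight contributes little background – can be made concrete with a toy signal-to-background calculation. A minimal sketch, with placeholder irradiance values that are assumptions for illustration, not data from Figure 1:

```python
# Toy comparison of active illumination at 940 nm vs ~1,400 nm.
# All numbers are illustrative placeholders, chosen only to reflect that
# solar background is strongly suppressed near 1,400 nm by water absorption.

SCENES = {
    # wavelength_nm: (emitted signal reaching sensor, solar background), arb. units
    940:  (10.0, 50.0),   # plenty of sunlight at 940 nm -> high background
    1400: (10.0, 0.5),    # water absorption band -> very low background
}

for wavelength, (signal, background) in SCENES.items():
    # Fraction of detected light that is your own emitted signal.
    contrast = signal / (signal + background)
    print(f"{wavelength} nm: signal fraction of total = {contrast:.2f}")
```

With the same emitted power, the assumed numbers give a signal fraction of roughly 0.17 at 940 nm versus about 0.95 at 1,400 nm, which is the contrast advantage the interview refers to.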
SE: How did you determine this?
Malinowski: What we're looking at is how to access these wavelengths. Due to its physical properties, silicon is at a disadvantage here. The traditional method is bonding, where you take another material, such as indium gallium arsenide or mercury cadmium telluride, and bond it to the readout circuit. This is the established art, widely used in defense, military, and high-end industrial or scientific applications. It's expensive. Sensors manufactured this way often cost thousands of euros because of the bonding process and manufacturing costs. Alternatively, you can grow the material you need, such as germanium, but this is very difficult, and there are problems getting the noise low enough. We use a third method, which is to deposit the material. In this case, we use organic materials or quantum dots. We take materials that can absorb shortwave-infrared or near-infrared light and deposit them with standard methods such as spin coating, and we get very thin layers. That's why we call these "thin-film photodetector" sensors. The material is much more absorbent than silicon, and it sits like a pancake on top of the readout circuit.
SE: How does this compare to other materials?
Malinowski: Silicon diodes require more volume and more depth, and at these longer wavelengths silicon becomes transparent. In contrast, a thin-film photodetector (TFPD) image sensor has a monolithically integrated stack of materials, including a photosensitive layer such as quantum dots or organic material, which means it is a single chip. There is no bonding on top of the silicon. The problem with this approach is that when you integrate a photodiode like this on top of a metal electrode, it's hard to get the noise low enough, because there are some inherent noise sources that can't be eliminated.
Figure 2: Thin-film photodetectors. Source: imec
SE: How did you solve this problem?
Malinowski: We followed the path silicon image sensors took in the late '80s and '90s with the introduction of pinned photodiodes. You decouple the photodiode region, where the photons are converted, from the readout. Instead of just having the thin-film absorber in contact with the readout, we introduced an additional transistor. This is the TFT, which is responsible for fully depleting the structure, so that all the charge generated in the thin-film absorber can be transferred to the readout through this transistor structure. In this way, we significantly limit the noise sources.
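In pinned-photodiode pixels, this decoupling is what enables correlated double sampling (CDS): the reset level of the readout node is sampled before the charge is transferred, then subtracted from the signal sample, cancelling reset (kTC) noise and per-pixel offsets. A minimal sketch of that idea under an assumed pixel model – the noise magnitudes below are illustrative, not imec's figures:

```python
import numpy as np

rng = np.random.default_rng(0)

N_PIXELS = 100_000
SIGNAL_E = 500.0        # photo-generated electrons per pixel (illustrative)
KTC_NOISE_E = 50.0      # reset (kTC) noise, electrons rms (assumed)
OFFSET_SIGMA_E = 30.0   # per-pixel fixed offset spread (assumed)
READ_NOISE_E = 5.0      # amplifier noise per sample (assumed)

# Per-pixel fixed offsets plus one random reset level per frame.
offset = rng.normal(0.0, OFFSET_SIGMA_E, N_PIXELS)
reset_level = offset + rng.normal(0.0, KTC_NOISE_E, N_PIXELS)

# Sample 1: the reset level, taken before charge transfer.
sample_reset = reset_level + rng.normal(0.0, READ_NOISE_E, N_PIXELS)
# Sample 2: the same reset level plus the transferred signal charge.
sample_signal = reset_level + SIGNAL_E + rng.normal(0.0, READ_NOISE_E, N_PIXELS)

# Without CDS, kTC noise and offsets dominate; with CDS, they cancel.
no_cds = sample_signal
cds = sample_signal - sample_reset

print(f"noise without CDS: {no_cds.std():6.1f} e-")
print(f"noise with CDS:    {cds.std():6.1f} e-")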
SE: Why is noise an issue in sensor design?
Malinowski: Noise comes in different forms. Noise can be the total number of unwanted electrons, but those electrons can come from different sources and for different reasons. Some are related to temperature, some to the inhomogeneity of the chip, some to transistor leakage, and so on. With this approach, we are targeting some of the noise sources associated with the readout. Every image sensor has noise, but the way it is dealt with differs. For example, the silicon-based sensors in iPhones handle noise sources with specific readout circuit designs whose architectural foundations date back to the '80s and '90s. That is what we are trying to replicate with a new image sensor based on a thin-film photodetector – applying old design techniques to a new type of sensor.
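Because these sources are independent, they add in quadrature, and a simple budget shows which term dominates a pixel's signal-to-noise ratio. A hedged sketch with made-up magnitudes (the values are placeholders, not measurements):

```python
import math

# Illustrative noise budget for one pixel readout, in electrons rms.
# All magnitudes below are assumed placeholder values.
signal_e = 1000.0                  # collected photoelectrons
shot_noise = math.sqrt(signal_e)   # photon shot noise = sqrt(signal)
dark_e = 40.0                      # dark-current electrons (temperature-dependent)
dark_noise = math.sqrt(dark_e)     # dark-current shot noise
read_noise = 8.0                   # readout/transistor noise
fpn = 5.0                          # fixed-pattern (non-uniformity) residue

# Independent noise sources add in quadrature.
total = math.sqrt(shot_noise**2 + dark_noise**2 + read_noise**2 + fpn**2)
snr_db = 20 * math.log10(signal_e / total)

print(f"total noise: {total:.1f} e-, SNR: {snr_db:.1f} dB")
```

At high signal levels the photon shot noise dominates; it is in low-light conditions that the readout-related terms the interview discusses become the limiting factor.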
SE: Where do you expect this to be used? You mentioned cars. Does it also work with medical devices?
Malinowski: The biggest impetus for this technology comes from consumer electronics, such as smartphones. If you use a longer wavelength, you get a lower background, because there is less light at that wavelength in the atmosphere, and that means better contrast. It's enhanced vision: you can see more than the human eye can see, so your camera can capture more information. Another reason is that longer wavelengths pass more easily through certain displays. The promise is that with this kind of solution you can place a sensor (e.g., for Face ID) behind the display, which increases the usable display area.
Figure 3: Enhanced field of view for improved safety. Source: imec
Another reason is that at longer wavelengths the eye is much less sensitive – by about five to six orders of magnitude compared with near-infrared wavelengths – which means you can use a more powerful light source. You can emit more power, which means longer range. For cars, you get extra visibility, especially in bad weather conditions such as fog. For healthcare, it can help advance miniaturization. In some applications, such as endoscopy, the prior art uses other materials and more complex integration, so miniaturization is quite difficult. With the quantum-dot method, you can make very small pixels, which means higher resolution in a compact form factor. That makes further miniaturization possible while maintaining high resolution. In addition, depending on the target wavelength, we can achieve very high water contrast, which may be one reason the food industry is interested. You can better detect moisture in grain products such as cereals.
Figure 4: Potential applications. Source: imec
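To see why the eye-safety headroom translates into range: for an active illumination system whose detected return falls off roughly as 1/R², the usable range scales as the square root of emitted power. A back-of-the-envelope sketch under that assumed model – the reference point and power factors are illustrative, not figures from the interview:

```python
import math

# Assumed simple model: detected return power ~ P_emit / R^2,
# so for a fixed detection threshold, range ~ sqrt(P_emit).
def max_range(p_emit_w: float, p_ref_w: float, range_ref_m: float) -> float:
    """Range at emitted power p_emit_w, given a reference (power, range) pair."""
    return range_ref_m * math.sqrt(p_emit_w / p_ref_w)

# Hypothetical reference point: 1 W of 940 nm illumination reaches 50 m.
P_REF, R_REF = 1.0, 50.0

# If eye-safety limits permit, say, 10x or 100x more power at ~1,400 nm
# (the exact permissible factor depends on the applicable safety standard),
# the same detection threshold is reached at sqrt(10)x or 10x the range.
for factor in (1, 10, 100):
    print(f"{factor:>4}x power -> {max_range(factor * P_REF, P_REF, R_REF):6.0f} m")
```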
SE: With improved low-light vision, could this have military applications?
Malinowski: Such sensors are already used by the military, for example, to detect laser rangefinders. The difference is that the military is willing to spend 20,000 euros on a camera. At that price, the automotive and consumer sectors won't even consider the technology.
SE: So the breakthrough here is that you can have something that already exists, but you can have it at consumer-scale pricing?
Malinowski: Exactly. Because of the miniaturization, and because monolithic integration lets you scale the technology, you can get consumer-scale quantities and prices.
SE: What other trends do you see in sensor technology?
Malinowski: One of the current discussion points is precisely this – going beyond visible imaging. The existing technology is already excellent for taking pictures. The new trend is that sensors are becoming more application-specific. The output doesn't need to be a pretty picture. It can be specific information. With Face ID, the output can effectively be a 1 or a 0 – the phone is either unlocked or it isn't. You don't need to see a picture of your face. Interesting new modalities are also emerging, such as polarization imagers, which work like polarized glasses: with certain reflections, they see better. There are event-based imagers, which only observe changes in the scene – for example, if you study the vibrations of a machine or count the number of people passing a store. If you have an autopilot system, you need a warning that an obstacle is about to appear and that you should brake. You don't need a nice picture. This trend means more fragmentation, because designs are more application-specific. It changes the way people design image sensors: they look at what is good enough for a particular application, rather than optimizing image quality. Image quality is always important, but sometimes you need something simpler that gets the job done.
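Event-based imagers typically report a pixel only when its log-intensity changes by more than a contrast threshold, rather than streaming full frames. A minimal sketch of that thresholding idea – the threshold value and the toy frames are made up for illustration:

```python
import numpy as np

THRESHOLD = 0.2  # log-intensity change that triggers an event (assumed)

def events(prev_frame: np.ndarray, frame: np.ndarray) -> list[tuple[int, int, int]]:
    """Return (row, col, polarity) events where log intensity changed enough."""
    diff = np.log1p(frame.astype(float)) - np.log1p(prev_frame.astype(float))
    out = []
    for r, c in zip(*np.nonzero(np.abs(diff) > THRESHOLD)):
        out.append((int(r), int(c), 1 if diff[r, c] > 0 else -1))
    return out

# Toy scene: a static background with one pixel that brightens.
prev = np.full((4, 4), 100, dtype=np.uint16)
curr = prev.copy()
curr[2, 3] = 180  # a moving edge or vibration at one pixel

print(events(prev, curr))  # -> [(2, 3, 1)]: only the change is reported
```

Because static pixels produce no events, the output is sparse – which is exactly why this modality suits tasks like vibration monitoring or people counting, where only the change matters.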
SE: Is it important to know whether it's a person or a tree? Or is it enough to know that you need to brake now?
Malinowski: In the automotive industry, this is still debated. Some people want to classify all objects – they want to know whether it's a child, a cyclist, or a tree. Others say, "I just need to know whether something is in the way, because I need to trigger the brakes." So there is no single answer.