With the application process for the CHIPS and Science Act now open, one question is how the money will be spread over the four categories that the government intends to subsidize. “Leading-edge logic” factories (“fabs”) have attracted the most attention because the U.S. is no longer at the front of the pack in the most advanced semiconductor manufacturing. But “current generation and mature nodes” is interesting for a different reason – these fabs are used to make a wide variety of digital devices that are embedded in everyday products as well as systems critical to national defense. Among these chips are the optical image sensors that are used in digital cameras, phones, cars, doorbells, and a rapidly growing number of other applications.
I recently had the opportunity to visit SIONYX, a company spun out of Harvard in 2006 that makes ultra-low-light image sensor chips for use in military applications, as well as for use in a range of interesting consumer products. Even though SIONYX is primarily an outsourced product provider, meaning someone else manufactures its chips and assembles its products offshore to its specifications, a deeper dive tells us a lot about why it might be important to start making these chips closer to home.
Digital image sensors convert light into electrical signals. There are two main types: charge-coupled devices (CCDs) and CMOS image sensors. CCDs were invented at Bell Labs in 1969, and they were popular in early digital cameras. CMOS image sensors have largely supplanted them because they can be made with processes more compatible with high-volume chip manufacturing, and because transistors for processing the signals and converting them into digital outputs can be integrated on the same device. They are usually designed with rows of photodiodes arranged in a rectangular pixel array that capture the light, coupled with amplifiers and peripheral circuitry that convert the electrical signal into a value that a camera can read out. Thus an iPhone 14 main camera chip with 12 million pixel resolution has that number of photodiodes on its sensor chip plus all the read-out circuitry.
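To make the photodiode-to-digital-value path concrete, here is a toy sketch of a sensor readout in Python. It is illustrative only, not any vendor's actual pipeline: real sensors involve analog amplification, correlated double sampling, and much more, and the full-well and ADC numbers below are made up.

```python
def read_out(photon_counts, full_well=10_000, adc_bits=10):
    """Convert per-pixel photon counts into digital values (toy model).

    photon_counts: 2-D list of photons captured by each photodiode.
    full_well:     maximum charge a pixel can hold before it saturates.
    adc_bits:      resolution of the analog-to-digital converter.
    """
    max_code = 2 ** adc_bits - 1  # e.g. 1023 for a 10-bit ADC
    digital = []
    for row in photon_counts:
        digital.append([
            # clamp to the full well (saturation), then quantize to an ADC code
            round(min(max(photons, 0), full_well) / full_well * max_code)
            for photons in row
        ])
    return digital

# A tiny 2x2 "sensor": one dark pixel, two mid-level pixels, one overexposed.
frame = [[0, 5_000], [9_000, 20_000]]
print(read_out(frame))  # dark pixel -> 0; saturated pixel -> max code 1023
```

A 12-megapixel sensor does exactly this kind of conversion, in parallel analog hardware, for 12 million photodiodes per frame.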
The key to producing a high-quality image is the design of the pixels, and here is where you start running into tradeoffs. How much light you capture depends on the area of the sensor: the bigger the sensor, the more light it gathers. But a larger sensor also needs bigger lenses and a longer optical path, which both costs more and makes for a bulkier end product.
With a larger sensor chip, the backup camera on your car might grow from the size of a quarter to the size of a coffee mug, which would both cost more and bring packaging challenges. On the other hand, the more pixels you can put on an individual sensor, the higher the resolution you get. But when you have more pixels in the same area, each pixel has to be smaller and therefore receives proportionately less light.
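The pixel-count tradeoff is simple arithmetic: on a fixed die, light gathered per pixel scales with pixel area, which is sensor area divided by pixel count. A quick sketch (the 35 mm² die size is a hypothetical number chosen for illustration, and fill factor and microlenses are ignored):

```python
def light_per_pixel(sensor_area_mm2, megapixels):
    """Relative light per pixel ~ pixel area = sensor area / pixel count."""
    return sensor_area_mm2 / (megapixels * 1e6)  # mm^2 per pixel

# Same hypothetical 35 mm^2 die, two different resolutions.
a12 = light_per_pixel(35.0, 12)  # 12 MP
a48 = light_per_pixel(35.0, 48)  # 48 MP on the same die

# Quadrupling the pixel count cuts light per pixel to ~a quarter.
print(a48 / a12)  # ~0.25
```

This is why "more megapixels" alone does not mean a better low-light camera: resolution and per-pixel sensitivity pull in opposite directions on a fixed sensor size.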
Performance innovations in sensors can occur both in the design of the pixel array and in the design of the peripheral circuitry. We have seen a lot of improvements over the last decade, for example higher pixel resolutions, better low-light sensitivity, better handling of wide-dynamic-range scenes, and ever-faster frame rates with ultra-slow-motion modes.
People have come up with a number of ways of improving the peripheral circuitry. One nifty trick was to move the wiring that connects the pixels to the peripheral circuitry to the back side of the chip from where the light comes in so that it doesn’t block incoming light. These backside illuminated (BSI) sensors highlighted how clever design can improve low-light performance.
The other area for innovation is in the design of the pixels. Here’s where SIONYX comes in. The company uses several tricks to help its pixels not only absorb more of the light, but to have better sensitivity over a broader range of the color spectrum.
The first is what it calls a "black silicon" layer, which makes each pixel absorb the available light more efficiently. The second is deep trench isolation, which embeds a more effective barrier between adjacent pixels, making each pixel better at trapping light and reducing noise interference between neighbors. It also uses tricks like changing the thickness of some layers to improve light capture.
SIONYX embeds these sensors in some pretty interesting products. Its night vision cameras are in the U.S. Army’s Integrated Visual Augmentation System (IVAS) project, and it is also selling to law enforcement and for night time search and rescue. On the consumer side, it is selling into outdoor applications like hunting and nighttime navigation for boating.
Here’s where the separation of design from manufacturing gets interesting. For peripheral circuitry innovations chip designers can work independently and just send their completed designs to the fab for manufacturing. The designers simply have to follow a set of design rules (embedded in what is called a process design kit, or PDK) that are given to them by the fab. As long as the design complies with this set of rules they do not need to interact with the fab until the design is complete and they send it off for manufacturing.
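The spirit of those PDK design rules can be illustrated with a toy rule check. A real PDK contains hundreds of rules plus device models and layout libraries, and the numbers and names below are invented for illustration, but the contract is the same: if the layout passes the fab's checks, the designer never needs to talk to the fab.

```python
# Hypothetical fab-supplied rules (real PDK rules are far more numerous).
DESIGN_RULES = {
    "min_metal_width_nm": 90,
    "min_metal_spacing_nm": 90,
}

def check_wires(wires, rules=DESIGN_RULES):
    """Flag wire segments that violate the minimum-width rule.

    wires: list of (name, width_nm) tuples from the designer's layout.
    Returns the offending (name, width_nm) tuples.
    """
    return [
        (name, width)
        for name, width in wires
        if width < rules["min_metal_width_nm"]
    ]

layout = [("clk", 120), ("data0", 80), ("vdd", 300)]
print(check_wires(layout))  # [('data0', 80)] -- must be fixed before hand-off
```

Peripheral-circuitry innovation fits this arm's-length model; pixel-level process innovation, as the next sections describe, does not.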
In the early days this design transfer process involved submitting a computer tape with the design (hence the term “tape-out” to signify shipping the finished design off to manufacturing).
In contrast, innovations at the pixel level — and in particular process innovations such as the ones noted earlier — require close collaboration between the designer and the manufacturing fab. Chip manufacturers want to maintain very high yield and reliability while supporting numerous types of chips, so they create well-defined recipes: process modules with parameter limits for each step in the process flow. Any innovation that requires a modification to this delicately balanced set of recipes cannot simply be done by the chip designer independently from the manufacturer. This has become most apparent when producing bleeding edge chips like the application processors in phones or CPU chips for computers.
For many years semiconductor fabs could advance from one generation to the next mostly by shrinking transistors and making other process tweaks. But over the last few generations they have had to start practicing design-technology co-optimization (DTCO), in which process R&D teams work closely with design teams every day to explore new architectures and approaches. DTCO for advanced chips and the SIONYX example both highlight the increasingly close linkage between manufacturing and innovation: if you can't manufacture, you can't innovate.
Today the world's most advanced image sensor fabs, both foundries and captive facilities, are located overseas, predominantly in Asia. This creates a challenge for U.S. designers of image sensors who innovate at the pixel and process level. For innovations that matter to U.S. national and economic security, it would be far better for that innovation to happen in a domestically located fab, even if that fab were owned and operated by a foreign company. Given how pervasively image sensors are used in both defense and commercial applications, especially with the rapid growth we will see in the automotive sector, one could make a strong case for establishing domestic image sensor manufacturing capabilities. I imagine that the Department of Defense would agree.
Disclosure: I serve on the Industrial Advisory Committee of the U.S. Department of Commerce. The views expressed here are my own, and do not represent the positions of that committee or the Department of Commerce.
Source: https://www.forbes.com/sites/willyshih/2023/04/11/sionyx-why-making-semiconductor-image-sensors-in-the-us-matters/