Tetsuya Hoshino
What are we living for?

--Scientific Approach to Value Inquiry--

Introduction

Research on the brain

It is very difficult to determine what research should be done in the field of science and technology. Research truly worth spending one's life on should be meaningful in itself and expected to remain useful even a thousand years from now.
  In both science, which seeks truth, and engineering, which aims at technological applications to society, it will be important to clarify increasingly complex science and technology in relation to the universe, society, and living things. It is not difficult to imagine that this consideration alone could constitute a major discipline; the philosophy of science is one example.
  Among living organisms, the relationship with human beings is a particularly important factor in the value of research. If we regard the brain as the center of human thought, it is the brain that mediates between people and society. The study of human beings, especially the brain, therefore occupies an important place in academia, compounded by the difficulty of elucidating the brain.

1. Value system suggested by science

Fig.1 Schrödinger's cat

1.1 Quantum Theory

Science has had major implications for value systems; the uncertainty principle of quantum theory and the gene-based theory of evolution are two examples. The uncertainty principle suggested that the world is not mechanistic and predestined, but full of unpredictable possibilities.
   The uncertainty principle states that there is an indeterminacy in the product of energy and time, so that the two cannot both be measured exactly. A well-known way of explaining the relationship between the uncertainty principle and macroscopic phenomena to the general public is the thought experiment known as "Schrödinger's cat". If quantum processes determine whether the cat in the box is dead or alive, we will not know until we open the box. On the macroscopic level, such things are impossible [1], but in the microscopic realm they happen frequently.
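   For reference (the original states the relation only in words), the energy-time uncertainty relation is commonly written as

$$\Delta E \,\Delta t \gtrsim \frac{\hbar}{2},$$

so the more sharply the energy is defined, the longer the time scale over which it must be measured.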
   Another example is the butterfly effect in chaos theory [2]. Even a butterfly flapping its wings can affect the weather in the long run. The butterfly effect, too, suggests that the world is unpredictable and uncertain.
   Penrose believed that quantum effects govern the roots of brain functioning [3]. Does this also imply unpredictable psychological phenomena? I believe that brain function can, in principle, be explained by classical logic. The reason is that even nuclear spins, a form of memory well suited to quantum computation, do not theoretically work well enough in the brain to constitute logic [4]. To perform quantum computation, each spin must be highly independent and free from external perturbations. In liquids, nuclear spins are highly independent and quantum calculations can be performed with them [1, 5], but in solids this is difficult. In the brain, where the interactions take place in the solid state, decoherence, an effect that breaks the independence of nuclear spins, has been found to be significant [6]. It would be difficult to construct dense logic circuits for quantum computation in the brain.

Fig.2 Genes and Evolution

1.2 Gene Theory

One foundation for the scientific value system was provided by the series of studies leading from Darwin's theory of evolution (the preservation of species) [7], through Nägeli's discovery of chromosomes and Mendel's laws of heredity, to Watson and Crick's discovery of the double helix structure of DNA [8]. These studies suggest that human potential is constrained by genes.
One consequence that the concept of genes has brought to social thought is the selfish gene theory, which holds that genes manipulate humans [9].

2. The value of brain exploration (the importance of the brain as a vessel of value)

Fig. 3 Computer and its memory

2.1 Value system

Friedrich Nietzsche's nihilism and Buddhism share a common stance in that both take a step back from absolute value systems. Nietzsche denied claims of the form "this is absolutely right" or "this is absolutely beautiful" [10]. Buddhism, for its part, has a vast body of contemplation that continued long after its founder, the Buddha. Some of that contemplation is summarized in the Heart Sutra [11, 12, 13]. The Heart Sutra relativizes all arbitrary human ideas and values, as if looking at the earth from the distant universe.
   If all value is relative, as nihilism and the Heart Sutra suggest, then the brain has the status of a source of value creation, beyond its existence as a mere vessel. Elucidating the function of the brain is itself a search for the source of value criteria. Although we can describe the relationship between brain function and human behavior in terms of anger, joy, grief, and pleasure, it is difficult to show what anger, joy, grief, and pleasure are at the cellular level. To elucidate brain function in detail, we need technology to measure the shape of cells and their changes, and this technology is advancing rapidly.

2.2 Brain and Value system

Whereas quantum theory and gene theory explain the principles by which values are formed, the brain directly drives value-based behavior. It processes information from the outside and governs the behavioral principles that move the various parts of the body: the five senses, appetite, and libido. These correspond to the firmware and system files needed to start up a computer, and to the operating system (OS) that mediates between programs and hardware. Memory, on the other hand, is an important factor that shapes a person's personality, and behavioral principles are exerted on the basis of memory. Memory can be divided into temporary and long-term memory, corresponding respectively to Dynamic Random Access Memory (DRAM), a temporary storage device, and to hard disks and optical storage media, which are suited to long-term data storage.
  Computers are made up of various IC chips, including DRAMs. IC chips are the integrated circuits inside computers. As of 2022, an IC chip contains tens of billions of transistors, its basic unit [14].
  It is software that determines how a computer works; without software, it does nothing. If we know what software is in a computer, we can know what it is for and how it works. Software is complex, but the basic principle is a correspondence between commands, such as addition and subtraction, and the hardware that executes them. The brain is unlikely to work so simply, but research can be expected to begin by establishing this kind of correspondence.

3. The key to deciphering the brain

If we consider that the brain has a significant impact on value, we can see that brain research is an important subject with the potential to connect the humanities and the sciences. Yet, despite the vast amount of research on how the brain works, we know very little about what matters most. In a computer, the unit of storage is the memory cell (storage element) made of transistors, and information is stored as binary digits, 0 and 1. This is the most basic unit of a computer. In the brain, by contrast, it is unknown how the information corresponding to 0 and 1 is stored. Consequently, how the stored information is processed is understood even more vaguely [15, 16].
   The most obvious brain phenomenon related to memory is the grandmother cell: when a monkey sees a grandmother's face, one specific cell responds strongly, as if one cell corresponded to one memory [17]. The number of neurons that make up the human brain is between 50 and 100 billion [18, 19], 10 billion of which belong to the cerebral cortex [20, 21]. For a top-level supercomputer, the number of transistors per IC chip is 10^9 [22, 23], and the number of chips per computer is 10^5 [23]. The total number of transistors is then 10^14 (100 trillion), about three orders of magnitude more than the number of neurons. However, such a computer cannot do as many different things as a human being, and it does not seem superior to the brain. The idea that one cell corresponds to one memory does not explain this advantage. One possibility is that there is a more complex, cell-to-cell network. For example, if the distance between cells or the shape of the cells is a source of information, the amount of information would increase.
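   Using the figures above, the comparison can be made explicit:

$$10^{9}\ \text{transistors/chip} \times 10^{5}\ \text{chips} = 10^{14}\ \text{transistors} \approx 10^{3} \times 10^{11}\ \text{neurons},$$

a difference of roughly a factor of one thousand.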
   If, in the grandmother cell model, there is a Fourier-transform-like transformation from the spacing between neurons to a hologram, then not only the strong signal but also the weak signals that accompany it are important. If so, a measurement method that can read these weak signals becomes important. The pulse signal of a neuron and the shape of the neuron might both be parameters, and the signal might be modified by the neuron's shape. The open questions are whether the key to the information lies in the intensity or in the spacing of the signals, and how the signal branches.

4. Model of brain memory and thinking

As for the phenomenology of the brain, there is a vast amount of research on the transmission pathways of neural information and on where memories are located. However, the memory mechanism itself is not understood. It is still debated whether the mechanism is quantum, an indeterminate phenomenon, or classical, like a mechanical clock [3].
   Models also vary, and each emphasizes a different perspective. Gabor's theory that associative memory can be explained by holographic memory [24] focuses on the human brain's facility for association. Another theory is that the mind is influenced not only by the brain but also by the internal organs [25]. Furthermore, there are theories that cerebellar and cerebral chips exist as functional structures controlling activation apart from neuronal networks, and that they are the root of memory [26, 27]. Another direction is to try to explain the function of the brain using an electrical model of computation, for example, attempts to explain the learning process and to clarify the mechanism by which humans make judgments through the exchange of signals among various parts of the brain [28, 29].

5. Memory traces: Engrams

The brain is roughly divided into the cerebrum and the cerebellum. Among the brain organs, the hippocampus, located below the cerebrum, is believed to be involved in memory. Nearby is the thalamus, which serves as a transmission pathway for visual information to the cerebrum. Shape information is Fourier transformed and sorted into triangles, squares, and so on [30, 31, 32]. There have been many such studies on the pathways and general storage locations of memory information [33].
   On the other hand, we do not know where and how image memories are stored or how they are recalled [34]. What is required is a comprehensive theoretical framework, a set of bridging laws that explain how mental events are related to brain activity patterns in a redundant way [35]. It is also not clear where the memory traces (engrams) that form the premise for this study are left. For example, a single cell responds strongly to specific image information, such as a grandmother's face, but it is not known how that information is represented and in what form it is stored at the cellular level. The question, then, is what should be revealed to provide this clue, and this leads to the real issue: what should we measure?
   Despite the vast amount of research that has been done, the failure to locate memory traces suggests the need to incorporate new research methods.

6. Importance of brain measurement

We have discussed the possibility that the brain, which is merely a vessel, contributes as a source of value, but the elucidation of the brain has not yet reached the point where it can be linked to value. Understanding the brain requires a variety of knowledge. The brain's growth process, the mechanisms of memory and signal transduction, and models of the brain based on these mechanisms are some examples. The presence or absence of connections with other organs is also important. Principles of human behavior have been studied in psychology, but ultimately, an integrated understanding of the brain down to its microscopic functions will enable discussions that link it to value [36].
   What is important is how meaning and value are linked to memory and signal processing in neural information transfer. The question is how the abstract concepts of meaning and value are represented at the signal level. For example, the emergence of different responses in a constant repetitive process may indicate some significance. Or it may be important that the response changes with specific input patterns.
  
   Consider the case where the external signals to the five senses are always the same. For example, even if a person is confined to a room with no windows and no objects, only walls, and keeps his or her eyes closed, he or she can still think and create. An adult who has already formed many memories can think and create something even without signals from outside. If meaning and value can be created inside the brain alone, the content of the signal output would change spontaneously even when the external signals are steady.
  The same input, presented repeatedly over time, should thus elicit different responses. This may be due to reconnection or modification of neurons, or to the reproducibility of the signal. How memories are stored and how values are formed will be an important key to linking the brain and society.
   Psychology has examined many principles of human behavior, characterizing the black box called the brain through various case studies. In the future, a better understanding will be achieved by mapping psychological events onto more physical, brain-based evidence. For example, experiments using genetically engineered mice with altered brains have revealed relationships between brain function and specific genes [37]. In addition, linking value to the brain would reveal how specific memories come to be considered desirable and linked to behavior.
  Measurement technology has progressed rapidly and many brain models have been proposed, but neither has yet reached a level that contributes decisively to the elucidation of the brain. For example, the speed and resolution of measurements are considered insufficient, and brain models are still debated even at the level of principle, such as whether the brain is quantum or not.
   Microscopic structures have been measured for brain regions, such as the cerebrum, cerebellum, frontal lobe, temporal lobe, and other segments, and the general role of each region is becoming clearer. Measurement using electrodes as measurement terminals and fluorescence as a marker for specific areas has revealed the functions of neurons, the basic unit of the brain, at the molecular level.
   Thus, much effort has been devoted to the study of the brain. Yet, among the various kinds of knowledge, the mechanisms of the brain seem to be the least understood. Blu-ray optical memory stores a single bit as its unit: at each point, a 1 or 0 is determined by high or low reflectance. We do not even know what the corresponding memory traces are in the brain. In other words, the principle of memory is not clearly understood. For example, when we memorize our grandmother's face, a computer would break it down into 1s and 0s and store them in IC circuits, but what kind of information is stored in the brain is unknown. Magnetic Resonance Imaging (MRI) has provided a vast amount of information. Still, measurement techniques need to improve in many respects, such as spatial and temporal resolution, sensitivity, depth of penetration, elemental differentiation, and detection of changes in molecular structure. In particular, we focus on neurons, the basic unit of the brain. Points of interest are the branching of signals to neighboring neurons and the modulation of the intensity of each signal within a short period of time. If these can be measured at high speed, with high sensitivity and precision, nondestructively and without contact, it is highly likely that new information can be obtained.




Fig.4 Image of a grandmother cell neuron and a pigeon [16]

Memory traces are said to be slight changes in neuron shape [38]. But can a single cell memorize such complex faces?
   There is a concept called the hologram. In holographic memory, information is recorded on a surface, and multiple pieces of information can be read out, for example by changing the angle of incidence. In light, interference relies on the wavefront being maintained. If brain signals maintain their coherence in the myelin sheath, it would not be surprising for similar interference effects to occur. Vision, where the input information is clear, would be the easiest to analyze. The visual pathway has been well studied, but how it is linked to memory is not clear [39].
   A hologram is delocalized, with the memory area spread out, but a localized memory model is also conceivable. When a wavefront is scattered by an image together with a strong scattering point that serves as a reference, the original image can be recovered by Fourier transforming the scattering pattern.
   In such a light-scattering measurement, the contrast of the scattering pattern is highest when the focal point is just in front of the scatterer. If there are two scatterers, the pattern is determined by the distance between them, so a memory model can be constructed that uses the distances between scatterers as information. If this model applies to neuronal memory, we can think of the grandmother cell as the central pillar with the highest scattering intensity among the multiple scatterers; the contrast will be highest when a wave of electrical signals in phase with the grandmother cell arrives at the region containing the grandmother cell and the cells corresponding to her information.
   This means that the information is formed from the distances between that cell and other cells, starting from that cell. If so, then only strong signals have been observed so far, and the true information would be contained in the weak signals that flow to the other cells.
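   As an illustration of how the distance between scatterers encodes information (a textbook two-point interference formula, added here for clarity), two coherent point scatterers separated by a distance d produce a far-field fringe pattern

$$I(\theta) \propto \cos^{2}\!\left(\frac{\pi d \sin\theta}{\lambda}\right),$$

so the angular fringe spacing directly reflects d, and reading the pattern recovers the separation.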

7. Measurement of neurons


The need for weak-signal measurement can be inferred from the grandmother cells. If faint signals affect memory, the signal of each neuron must be measured with high sensitivity. Since the signals are spike trains whose intervals are on the order of milliseconds, it is important to achieve both high speed and high resolution. Recent high-sensitivity high-speed cameras have achieved measurement speeds of nearly 1 μs per frame with high sensitivity and dynamic range [40].

  Note: For example, Photron's FASTCAM Mini AX is a high-speed camera that shoots 20,000 frames per second at 640 x 480 pixels. In addition to 12-bit monochrome output, it achieves an ultra-high sensitivity of ISO 40,000 (monochrome).
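  From these figures, the frame interval at 20,000 frames per second is

$$\Delta t = \frac{1}{20{,}000\ \mathrm{s}^{-1}} = 50\ \mu\mathrm{s},$$

or about 20 frames per millisecond-order spike interval; frame rates approaching the 1 μs per frame mentioned above are generally reached by reducing the pixel count.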



The brain is a neural network of neurons arranged in a lattice-like pattern. Most previous studies have been limited, at best, to monosynaptic transmission [41]. In reality, even monosynaptic transmission has not been accurately analyzed. The mainstream method for analyzing it has been to insert an electrode into a neuron and measure the change in potential. In this method, however, the electrode may affect the signal. Optical measurement is a promising method for non-contact measurement. High-speed, high-resolution methods for 3D measurement by holography or by two-photon absorption are under investigation [42].
   In addition to holography and two-photon absorption, there are other methods for optically measuring signals from multiple neurons that use scatterometry [43].
   Holograms are superior in that they allow easy control of spatial light collection and detection position. Two-photon absorption is unique in that it can measure deep into the brain. On the other hand, scatterometry is superior in terms of resolution and measurement speed.

8. Measurement method


The method of measuring signals in the brain depends on the object to be measured. The measurement targets can be divided into three categories: the brain surface, regions a few millimeters deep, and the entire brain. In addition, resolution and measurement speed are important factors. Measurement methods can be classified as non-contact, contact, or destructive.
   Techniques that can measure the entire brain are X-ray CT and MRI. These techniques can measure the structure of the entire brain noninvasively. MRI, in particular, can capture not only structure but also molecular motility; its in-plane resolution can reach 14 μm. In addition, the diameter of nerve bundles in specific areas of the brain, such as the corpus callosum, can be determined, as well as their orientation. For example, one can distinguish bundles of 2-4 μm from those of 10 μm [44].

   For brain surfaces, the atomic force microscope (AFM) and the scanning ion conductance microscope (SICM), for example, can measure the three-dimensional shape of a cell by tracing its surface with a probe at nanometer precision [45]. SICM uses the change in ion current caused by the change in distance between the neuron and the probe to image the surface shape and charge without touching the sample [46]. Its measurement speed is slow, about 25 μm^2/min, but among nondestructive methods its resolution is the highest [47].

   The method of measuring neuronal potentials with electrodes has achieved the most results for brain information transmission, although it is a contact method. One such method, the patch-clamp method, records the behavior of ions through ion channels and transporters. Neurotransmitter transporters terminate their neurotransmission by reuptake of neurotransmitters released from nerve endings [48].

  An important point in evaluating the brain's function as a vessel of value is how memories are stored. As a result of interaction with the external world, decisions based on memory determine a person's behavior.
   Until now, electrode-based techniques have been the main tool, because they can measure inside the brain and record the activity of specific neurons. Although promising, they have not yet been able to capture memory traces in the brain. One reason may be that, being a contact method, the measurement itself affects the nerves; it is not well suited to capturing the minute electrical signals of nerves.
   The typical method for non-contact measurement of this electrical signal is optical measurement.

9. Optical Measurement

9.1 Optical Measurement of Microstructures

It is a common desire of many people to accurately view the shape and color of minute specimens with an optical microscope. In reality, there are only a limited number of cases in which this wish can be fulfilled, as size and refractive index have a complex effect on visibility. Recent advances in techniques for calculating light propagation are providing clues to solving this problem.
   Well-known examples of shape and size influencing color are the rainbow of soap bubbles and the wings of morpho butterflies. Usually, color is greatly influenced by the imaginary part of the complex refractive index. However, some structures can produce colors that are quite different from those assumed from the complex refractive index.
   Shape and size, on the other hand, are seen differently depending on the complex refractive index. It is well known that information on the refractive index is essential in the measurement of film thickness. Knowing the shape and complex refractive index of an unknown micro object is nothing more than estimating these parameters from the measured shape, size, and color information.
   Color is affected by structure when the size is in the wavelength-scale region known as the resonance domain. If the sample is a particle, the particle size in the resonance domain is about 1 to 10 times the wavelength. In the linear approximation, light bends or travels straight at the sample surface and the light flux is maintained. For these particles, however, the light flux spreads at the interface, and this spread varies with wavelength. Therefore, the reflectance and scattering patterns differ at different wavelengths [49, 50, 51].
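   A common way to express this regime (added here; the original gives only the 1-10 wavelength range) is the dimensionless size parameter of Mie scattering theory,

$$x = \frac{\pi d}{\lambda},$$

where d is the particle diameter: diameters of 1 to 10 wavelengths correspond to x of roughly 3 to 30, the range in which scattering resonances are strongest.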

9.2 Classification of Optical Measurement and Scatterometry

The principle of using light to see small objects (the principle of microscopy) can be divided mathematically into three broad categories according to the degree of approximation. The first is ray tracing, which uses the fact that light travels in a straight line. This is the linear approximation: light travels straight through the sample, and the image of the sample is projected as it is. In X-ray CT, which is widely used in medicine, the original three-dimensional image is reconstructed from such projected images. The second is lens imaging. Its principle is based on the Fourier transform action of a lens [52, 53], and the Fourier transform is possible only under the Fraunhofer approximation or a similar approximation, the first-order Born approximation. The Fraunhofer approximation is valid when the measurement is made sufficiently far away compared with the sample size [54]. Therefore, it cannot take into account multiple reflections and multiple scattering within the sample, which require consideration of light propagation over short distances. Holography, one of the main methods for measuring 3D shapes, is based on the same approximation and treats light reflected at the sample surface as propagating freely. The third is a method that estimates the image from the scattering pattern while strictly taking into account the thickness and refractive index of the sample; it is called scatterometry. Obtaining the original image is a process of trial and error, and the method is difficult to apply unless the geometry is simple. However, if the analysis succeeds, the measurable object size can be reduced by two orders of magnitude compared with the other two principles, and the accuracy can be improved by two orders of magnitude.
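   A standard way to state the far-field (Fraunhofer) condition invoked above uses the Fresnel number:

$$F = \frac{a^{2}}{\lambda z} \ll 1,$$

where a is the characteristic size of the sample, z the distance to the observation plane, and λ the wavelength; the approximation breaks down when z is not large compared with a²/λ, as for thick or strongly scattering samples.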
   Scatterometry is an effective method for high-resolution, high-speed 3D measurement. The main conventional high-resolution 3D measurement methods are lens imaging, holography, and phase retrieval. Holography and phase retrieval are suitable for obtaining 3D images at high speed, but they rely on the Fourier transform and therefore on the Fraunhofer approximation. Consequently, when scattering and absorption are large, these methods are limited to about 100 wavelengths even for small measurement targets, and the accuracy is only about 10 wavelengths [55]. Scatterometry, by contrast, is a rigorous numerical evaluation of Maxwell's equations, the first principles of optics, and its performance is maintained even when absorption and scattering are large. It therefore improves the accuracy of 3D shape measurement by two orders of magnitude over Fourier-transform-based analysis when scattering and absorption are large [56].
   Another advantage of scatterometry is its high measurement speed. Scatterometry can perform high-resolution 3D measurements in a single shot over a range of several to several tens of wavelengths. It is faster than super-resolution measurements based on fluorescence decay times or two-photon absorption measurements based on point-by-point scanning.
   Scatterometry is especially useful for visible and ultraviolet light and for extreme ultraviolet and soft X-rays, where the differences in refractive index and extinction coefficient between medium and sample are often large and scattering is strong. At sample sizes of about 10 wavelengths, the Fraunhofer approximation, the basic principle of lens imaging and holography, is not valid. When the sample is thin, diffraction and scattering effects are small and can be analyzed with this approximation, but the thicker the sample, the more difficult this becomes [57].
   The disadvantage of scatterometry is that the analysis is difficult and the original image is not uniquely determined. The linear and Fraunhofer approximations, by contrast, provide a one-to-one correspondence between the original image and the scattering pattern; mathematical solutions are obtained reliably and quickly.
   In scatterometry, finding the original image from the scattering pattern is called solving an inverse problem. Deep learning or least-squares methods are used for this. Currently, these are limited to simple shapes: columnar structures with rectangular, triangular, or elliptical cross sections.
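   The least-squares route can be illustrated with a minimal sketch in Python (the forward model below is a toy interference pattern standing in for a rigorous Maxwell solver such as RCWA; all function names and parameter values are illustrative, not taken from the cited work):

import numpy as np
from scipy.optimize import least_squares

WAVELENGTH = 0.5                      # illumination wavelength in um (assumed)
ANGLES = np.linspace(-0.5, 0.5, 181)  # detection angles in radians

def forward_model(params, angles):
    # Toy stand-in for a rigorous scattering calculation. params is the
    # (width, height) of a columnar scatterer in um. A real implementation
    # would solve Maxwell's equations for these parameters (e.g., by RCWA).
    width, height = params
    u = np.pi * width * np.sin(angles) / WAVELENGTH
    envelope = np.sinc(u / np.pi) ** 2                                   # width sets the envelope
    fringes = np.cos(np.pi * height * np.sin(angles) / WAVELENGTH) ** 2  # height sets the fringes
    return envelope * fringes

# "Measured" pattern: synthetic data generated from a known true shape plus noise.
true_params = (1.2, 0.8)
rng = np.random.default_rng(0)
measured = forward_model(true_params, ANGLES) + 0.01 * rng.normal(size=ANGLES.size)

# Inverse problem: adjust the shape parameters until the computed pattern
# matches the measured one in the least-squares sense.
def residuals(params):
    return forward_model(params, ANGLES) - measured

fit = least_squares(residuals, x0=(1.0, 1.0), bounds=([0.1, 0.1], [5.0, 5.0]))
print("estimated width, height (um):", fit.x)

   The same loop structure applies with a rigorous solver; only the forward model changes, at much greater computational cost.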
   Scatterometry is powerful when wavelength is limited, as in visible and UV measurements, or when sample destruction at short wavelengths is a problem, as in the measurement of organic materials with X-rays. Using scatterometry, the wavelength can be lengthened by a factor of 100 for the same resolution. Taking this effect into account, the resolution is improved by a factor of over 10 if the sample damage is kept at the same level.

   Optical measurement is nondestructive, non-contact, and fast. In general, the higher the energy (the shorter the wavelength), the higher the resolution; this includes electron beams. On the other hand, the higher the energy, the more likely the sample is to be damaged by the measurement light [58]. This is especially problematic for organic materials such as living organisms.
   As a result of this trade-off, the limit at which high resolution can be achieved without destroying the sample has conventionally been around a wavelength of 10 nm.

9.3 Coherent Diffraction and Its Limitations

Light scattering from a laser can be treated as perfectly coherent light and described mathematically in a concise manner. White light, such as from a halogen lamp, by contrast, requires a treatment that takes the degree of incoherence into account, depending on the sample size and wavelength. Coherent light is suitable for measuring the size of objects because, with partially coherent light, the degree of incoherence is indeterminate and the calculations become complicated accordingly.
   In principle, the scattering pattern of coherent light can be predicted accurately by optical calculation methods. Although the properties of micro samples vary widely (the real and imaginary parts of the refractive index, shape, vertical and horizontal dimensions, and so on), a large amount of experimental data can be collected by varying the angle of incidence and the wavelength and measuring the angular distributions of transmitted and reflected scattering as well as the reflectance and transmittance, so rough information can be obtained.
   When a sample is actually measured, we face the problem that coherent light becomes incoherent inside the sample. This occurs even when the internal scatterers are small and have little effect on the scattering pattern. Exact light-scattering calculations assume coherent light, so this leads to large errors.
   If the internal refractive index distribution of an unknown sample were accurately known, it could be calculated; but the scattering intensity of micro-scatterers is small and their number innumerable, making the distribution difficult to measure. In this case, coherence can be used as an indicator: it is assumed that coherence decreases uniformly within the sample. This is a large approximation, but the agreement with experiment is better than if nothing is taken into account at all [59].
   Scatterometry has the advantage of not making any approximations, but instead has the disadvantage that the original shape cannot be obtained directly from the scattering pattern using an inverse Fourier transform or the like. This technique can be used only when the original shape is simple or when the approximate shape is known. This method is suitable for analysis when the approximate shape is known and only one cross-sectional shape is needed, as in the case of neurons.

9.4 High-speed, high-resolution 3D optical measurement of nerves

The principle of using light to see small objects (the principle of microscopy) can be divided, as mentioned above, into 1. the linear approximation, 2. the Fraunhofer approximation, and 3. the exact solution method (scatterometry). The second method is often used in conjunction with fluorescence in the measurement of nerve signals. Capturing the emission of single fluorophore molecules in a time-resolved manner has led to a significant improvement in spatial resolution: stochastic optical reconstruction microscopy (STORM). This method cannot improve temporal resolution unless the emission frequency is high, and the excitation intensity, that is, the focused intensity the sample can withstand, limits both the speed and the resolution of the measurement.
   The third, scatterometry, not only does not require fluorescent labeling, but also has the potential to significantly improve temporal resolution.

9.5 Versatile optical measuring device by scatterometry (high accuracy microscope)

The actual sample is often very different from the image that can be represented by the Fraunhofer approximation. In visible light and soft X-rays, where the effects of absorption and scattering are significant, this discrepancy is noticeable due to enhanced absorption and multiple scattering within the sample. In such cases, scatterometry can improve resolution by about two orders of magnitude for 3D measurements of simple geometries compared to lens imaging. In addition, it is possible to measure with a time resolution of less than a millisecond. This is a nondestructive, non-contact, and simple measurement method. High-resolution and fast detection of changes in the refractive index distribution pertaining to neuronal signal transduction is possible [60,61]. Recent advances in optical computation techniques described above have made it possible to quantitatively analyze shape and size.
   The refractive index distribution can be revealed if the relationship between the refractive index distribution in the cross section and the scattering pattern in the far field can be accurately calculated. As the method for calculating the angular distribution of the scattered light, we use Rigorous Coupled-Wave Analysis (RCWA), because it easily takes the internal structure into account and calculates the far-field scattering pattern directly. Although RCWA deals only with periodic structures, it can also be applied to isolated systems by devising the calculation appropriately [62]. In addition, selecting a specific structure requires focusing the light on the measurement site, and the positional relationship between the focal point and the sample strongly affects reproducibility; however, reproducibility can be ensured by adjusting the contrast of the scattering pattern [63].
   Combining the second and third methods allows both real-time imaging and high-resolution measurements. There are examples of visible light scatterometry of diffraction patterns of neurons in Golgi-stained cerebrum specimens from cats and Weigert-stained cerebellum specimens from humans [43].

9.6 Use of optical nonlinearity and time difference measurement

Measurements that take advantage of nonlinearities in optical absorption and scattering, such as two-photon absorption and Raman scattering, are increasingly being used. These techniques can achieve higher spatial resolution than measurements that do not use nonlinearity.
  The diffraction limit normally defines the resolution, and these are methods to obtain resolution beyond it. The image formation of such methods has been studied within the framework of Fourier optics; by exploiting nonlinearity, however, the resolution can be increased at each measurement point. For example, two-photon absorption by a fluorophore occurs only where the light intensity is strong. Using this, it is possible to probe a region narrower than the focal spot for one-photon absorption. In two-photon absorption, the wavelength of the incident light is twice the one-photon measurement wavelength, which makes it possible to choose a wavelength with low absorption and scattering in living organisms and to measure deep into the sample [64]. The problem is that acquiring a 3D shape requires measuring a large number of points, which tends to result in poor temporal resolution.
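   The resolution gain can be made concrete with a standard illustration (added here, assuming a Gaussian focal spot): because the two-photon signal scales as the square of the intensity,

$$S_{2p} \propto I^{2}, \qquad I(r) \propto e^{-2r^{2}/w^{2}} \;\Rightarrow\; I^{2}(r) \propto e^{-2r^{2}/(w/\sqrt{2})^{2}},$$

the effective spot radius shrinks from w to w/√2.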
   Another technique is nonlinear Raman scattering [65]. Nonlinear Raman is superior to linear Raman in both temporal and spatial resolution, and by selecting the wavelength it is possible to measure deep into cellular tissue [66, 67]. The dielectric constant varies with the intensity of the electric field; if we can observe only where this intensity change is large, we may be able to observe a region smaller than the focal diameter of the incident light [65]. Raman scattering generally has low sensitivity and requires long measurement times. To solve this problem, a method has been developed that amplifies the signal intensity by making the energy difference between two incident wavelengths resonate with a Raman vibrational mode of the sample molecule.
   The other approach is to use time resolution, as in STORM. The emission of single-photon fluorescence has a diffraction-limited spread that defines the resolution of the lens. However, if the center of that spread can be evaluated accurately, substantially higher resolution can be achieved.
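   To first order (ignoring pixelation and background; this estimate is added for illustration), the precision of locating the center improves with the number of detected photons N:

$$\sigma_{\mathrm{loc}} \approx \frac{\sigma_{\mathrm{PSF}}}{\sqrt{N}},$$

so detecting, say, 10^4 photons from one molecule localizes it about 100 times more precisely than the diffraction-limited width.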
   The problem with methods that use nonlinearity or time-difference measurement is that it is difficult to increase the time resolution: the stronger the laser intensity, the better the time resolution, but the more fragile the sample becomes.
