In a chemical analysis, the internal standard method involves adding the same amount of a chemical substance to each sample and calibration solution. The internal standard responds proportionally to changes in the analyte and provides a similar, but not identical, measurement signal. It must also be absent from the original sample matrix, so that the added amount is its only source. Plotting the ratio of the analyte signal to the internal standard signal against the analyte concentrations of the calibration solutions yields a calibration curve, which can then be used to calculate the analyte concentration in an unknown sample.
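As an illustration, the following minimal sketch (in Python, with made-up numbers and an assumed linear response) fits such a calibration curve from the signal ratios and uses it to estimate an unknown concentration:

import numpy as np

# Analyte concentrations of the calibration solutions (illustrative units, e.g. mg/L)
conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
# Measured signals of the analyte and the internal standard in each solution
analyte_signal = np.array([0.95, 2.1, 4.9, 10.2, 19.8])
is_signal = np.array([1.00, 1.02, 0.98, 1.01, 0.99])  # roughly constant, as expected

# Calibration curve: signal ratio as a linear function of analyte concentration
ratio = analyte_signal / is_signal
slope, intercept = np.polyfit(conc, ratio, 1)

# Unknown sample: the same amount of internal standard is added, then both signals are measured
unknown_ratio = 7.4 / 1.03
unknown_conc = (unknown_ratio - intercept) / slope
print(f"Estimated analyte concentration: {unknown_conc:.2f} mg/L")

Because the same amount of internal standard is added everywhere, its signal should be roughly constant, and the ratio inherits the linearity of the analyte signal.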
An appropriately chosen internal standard compensates for random and systematic sources of uncertainty that arise during sample preparation or from instrument fluctuations, because the ratio of the analyte signal to the internal standard signal is independent of these variations. If the measured analyte signal is erroneously shifted above or below its true value, the internal standard signal shifts in the same direction, leaving the ratio unchanged.
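To see why, suppose (as a simplified sketch) that both signals are proportional to concentration, with hypothetical sensitivity factors k_A and k_IS, and that a preparation loss or instrument drift scales both signals by the same unknown factor f. The factor then cancels in the ratio:

S_A / S_IS = (f · k_A · C_A) / (f · k_IS · C_IS) = (k_A / k_IS) · (C_A / C_IS)

Since the internal standard concentration C_IS is the same in every solution, the ratio depends only on the analyte concentration C_A.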
The earliest recorded use of the internal standard method dates back to Gouy's flame spectroscopy work in 1877, where he used an internal standard to determine whether the excitation in his flame was consistent. His procedure was reintroduced in the 1940s, when recording flame photometers became readily available. The use of internal standards has continued to grow, and the method has been applied to a wide range of analytical techniques, including nuclear magnetic resonance (NMR) spectroscopy, chromatography, and inductively coupled plasma spectroscopy.
In NMR spectroscopy, e.g. of the nuclei 1H, 13C, and 29Si, resonance frequencies depend on the strength of the magnetic field, which is not the same across all experiments. Frequencies are therefore reported as relative differences from tetramethylsilane (TMS), an internal standard that George Tiers proposed in 1958 and that the International Union of Pure and Applied Chemistry (IUPAC) has since endorsed.
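Concretely, the reported quantity is the chemical shift δ, defined on the dimensionless ppm scale as

δ (in ppm) = (ν_sample − ν_TMS) / ν_TMS × 10^6

where ν_sample and ν_TMS are the resonance frequencies of the nucleus of interest and of TMS measured in the same spectrometer; dividing by the reference frequency removes the dependence on the magnetic field.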
The standard addition method, often used in analytical chemistry, quantifies the analyte present in an unknown sample. It is useful for analyzing complex samples in which a matrix effect interferes with the analyte signal. Compared with the calibration curve method, the standard addition method has the advantage that the matrices of the unknown and the standards are nearly identical, because the standards are added directly to portions of the unknown sample itself. This minimizes the potential bias arising from the matrix effect when determining the concentration.
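A minimal sketch of the extrapolation step (Python, with illustrative numbers, ignoring dilution corrections): known amounts of analyte are added to aliquots of the unknown, a line is fitted to signal versus added concentration, and the magnitude of the x-intercept gives the concentration originally present.

import numpy as np

# Known concentrations of standard added (spiked) to aliquots of the unknown (e.g. mg/L)
added = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
# Measured signal for each spiked aliquot (illustrative values)
signal = np.array([2.1, 3.0, 4.1, 5.0, 6.0])

# Fit a line: signal = slope * added + intercept
slope, intercept = np.polyfit(added, signal, 1)

# Extrapolating to zero signal, the magnitude of the x-intercept (intercept/slope)
# is the analyte concentration originally present in the sample
unknown_conc = intercept / slope
print(f"Analyte concentration in the sample: {unknown_conc:.2f} mg/L")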
Isotopes are distinct nuclear species (or nuclides, to use the technical term) of the same element. They have the same atomic number (number of protons in their nuclei) and the same position in the periodic table (and hence belong to the same chemical element), but differ in nucleon number (mass number) because of different numbers of neutrons in their nuclei. While all isotopes of a given element have almost the same chemical properties, they have different atomic masses and physical properties.
The goal is to provide students with a complete overview of the principles and key applications of modern mass spectrometry and meet the current practical demand of EPFL researchers to improve structu