This lecture covers the concepts of sufficiency and ancillarity in sampling theory. It explains how statistics carry information about parameters and why some statistics are more informative than others. The Fisher-Neyman factorization theorem is presented as the key tool for determining whether a statistic is sufficient. Examples with normal and exponential distributions illustrate how sufficient statistics compress the data without losing information about the parameter. The lecture concludes with the definition of minimal sufficient statistics and their properties, emphasizing the role these statistics play in parameter estimation.
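As a brief sketch of the factorization criterion mentioned above (the notation here is assumed, not taken from the lecture): for an i.i.d. sample $X_1,\dots,X_n$ from an exponential distribution with rate $\lambda$, the joint density factors as

\[
f(x_1,\dots,x_n;\lambda)
= \prod_{i=1}^{n} \lambda e^{-\lambda x_i}
= \underbrace{\lambda^{n} e^{-\lambda \sum_{i=1}^{n} x_i}}_{g\bigl(T(x);\,\lambda\bigr)} \cdot \underbrace{1}_{h(x)},
\qquad T(x) = \sum_{i=1}^{n} x_i .
\]

Since the $\lambda$-dependent factor involves the data only through $T(x)$, the Fisher-Neyman factorization theorem implies that the sample sum (equivalently, the sample mean) is sufficient for $\lambda$: the $n$ observations are compressed into a single number with no loss of information about the parameter.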