Neurons in the human amygdala and hippocampus are classically thought to encode a person’s identity in a manner invariant to visual features. However, it remains largely unknown how visual information from higher visual cortical areas is translated into such a semantic representation of an individual person. Here, across four experiments (3,581 neurons from 19 neurosurgical patients over 111 sessions), we demonstrate a region-based feature code for faces, in which neurons encode faces on the basis of shared visual features rather than associations with known concepts, contrary to prevailing views. Feature neurons encode groups of faces regardless of their identity, broad semantic category or familiarity, and their coding regions (that is, receptive fields) predict their responses to new face stimuli. Together, our results reveal a new class of neurons that bridge perception-driven representations of facial features with mnemonic semantic representations, which may form the basis for declarative memory.