Distributed decision-making over networks involves multiple agents collaborating to achieve a common goal. In the social learning process, where agents aim to infer an unknown state from a stream of local observations, the probability of error in their decisions converges to zero exponentially fast in the asymptotic regime. The rate of this convergence, known as the error exponent, is influenced by the combination policy employed by the network. This work addresses the challenge of identifying optimal combination policies that maximize the error exponent. We establish an upper bound on the error exponents achievable by the social learning rule and characterize the conditions under which a combination policy attains this bound. By implementing the optimized policy, we enhance the error exponent, leading to improved accuracy and efficiency in the distributed decision-making process.
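A minimal sketch of the kind of social learning process described above, for intuition only: each agent holds a log-belief ratio, updates it with the log-likelihood of a fresh local observation, and combines it with its neighbors' updated beliefs through a combination matrix. The network size, the binary Gaussian observation model, and the ring-shaped doubly stochastic combination matrix below are illustrative assumptions, not choices taken from this work.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 5      # number of agents (assumed for illustration)
T = 200    # number of time steps
mu = 0.5   # observation mean under true hypothesis H1 (H0: mean 0)

# Doubly stochastic combination policy: self-loop plus two ring neighbors.
A = np.zeros((N, N))
for i in range(N):
    A[i, i] = 0.5
    A[i, (i - 1) % N] = 0.25
    A[i, (i + 1) % N] = 0.25

# lam[i] = log( belief_i(H1) / belief_i(H0) ), initialized to uniform beliefs.
lam = np.zeros(N)
for t in range(T):
    x = mu + rng.standard_normal(N)   # local observations generated under H1
    llr = mu * x - mu**2 / 2          # Gaussian log-likelihood ratio per agent
    lam = A @ (lam + llr)             # adapt locally, then combine over the network

decisions = lam > 0                   # each agent declares H1 if its ratio is positive
print(decisions)
```

Because the log-belief ratios accumulate network-averaged log-likelihood evidence with positive mean under H1, every agent decides correctly with probability approaching one, and the per-step drift governs the exponential decay rate of the error probability that the optimized combination policy seeks to maximize.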