The performance assessment of multi-objective heuristic algorithms is one of the most significant contributions of the evolutionary multi-objective optimization (EMO) community. By contrast, performance assessment in the context of many-objective optimization remains a challenging, open research field. Recent advances have revealed disagreements between Pareto-compliant performance metrics and indicated that the reference fronts produced by benchmark generators of Pareto-optimal fronts could be further improved. In this work, we investigate these reference fronts with the help of multi-dimensional visualization techniques and Pareto-monotonic archivers. Interestingly, the reference fronts produced by benchmark generators for the DTLZ and WFG continuous optimization problems exhibit significant deficiencies, even when only three objectives are considered. Furthermore, because the input solution sets for five-objective problems are of insufficient quality, the archivers are unable to produce reasonable approximation fronts. We conclude that the performance assessment of EMO algorithms urgently needs to address the generation of reference fronts.
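To make the notion of a Pareto-monotonic archiver concrete, the following is a minimal sketch, assuming minimization of all objectives, of the non-dominated filtering such archivers are built on: a solution is kept only if no other solution in the set Pareto-dominates it. The function names and the toy three-objective points are illustrative, not taken from DTLZ, WFG, or any specific archiver implementation.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization assumed):
    a is no worse than b in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_archive(points):
    """Return the subset of points not dominated by any other point in the set."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Toy three-objective example (hypothetical data, not a benchmark front):
front = [(1.0, 2.0, 3.0), (2.0, 2.0, 3.0), (0.5, 3.0, 2.5)]
print(nondominated_archive(front))  # (2.0, 2.0, 3.0) is dominated by (1.0, 2.0, 3.0)
```

A reference front of high quality should already be a fixed point of this filter; if an archiver removes many points from a generated front, the front contains dominated (and hence spurious) solutions.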