amd64 backend TODO:

Correctness:
- Compound return calling convention
- Implement more builtins (libgcc lacks several of them that gcc provides natively on amd64, so linking fails when cparser/libfirm uses the compilerlib fallback)
- Builtins not implemented: parity, popcount (see the sketch at the end of this file)
- Thread local storage not implemented
- x87: Implement unsigned -> x87 and x87 -> unsigned conversions (see the sketch at the end of this file)
- x87: Adapt the "fix spill when the float stack is full" case to amd64 (see the panic in x86_x87.c)

Improve Quality:
- Perform IncSP/IncSP peephole
- Immediate32 matching could be better and match SymConst and Add(SymConst, Const) combinations where possible
- Cmp allows Immediate and Address mode at the same time
- Support Read-Modify-Store operations (aka destination address mode)
- Leave out labels that are never jumped to (improves assembly readability, see the ia32 backend output)
- Align certain labels if beneficial (see the ia32 backend, compare with clang/gcc)
- Implement CMov/Set and announce this in the mux_allowed callback
- We always Spill/Reload 64 bits; improve the spiller to allow smaller spills where possible
- Support folding reloads into nodes (amd64_irn_ops: possible_memory_operand() / perform_memory_operand())
- Report instruction costs (amd64_irn_ops: get_op_estimated_cost())
- Transform IncSP+Store/Load into Push/Pop in a peephole pass
- Use the stack red zone where possible to avoid IncSP at the beginning/end of a function
- Compare node inputs can be swapped if we remember the swap in the compare node attributes; this lets us treat compares like commutative operations and, for example, swap inputs to enable load folding or immediates
- Match the RCPxx SSE instruction
- x87: Support source address modes
- Address mode support for Div/IDiv
- Perform some benchmark comparisons with clang/gcc and distill more issues to put on this list
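
Sketches:

Parity/popcount fallback: until the backend emits POPCNT (or an equivalent instruction sequence), the missing builtins could be satisfied by plain bit-twiddling helpers. This is only a sketch of what such a compilerlib-style fallback might look like; the names firm_popcount64/firm_parity64 are hypothetical, not the actual compilerlib symbols.

    #include <stdint.h>

    /* count set bits by repeatedly clearing the lowest one */
    int firm_popcount64(uint64_t x)
    {
        int count = 0;
        while (x != 0) {
            x &= x - 1;
            ++count;
        }
        return count;
    }

    /* fold the word onto itself until a single parity bit remains */
    int firm_parity64(uint64_t x)
    {
        x ^= x >> 32;
        x ^= x >> 16;
        x ^= x >> 8;
        x ^= x >> 4;
        x ^= x >> 2;
        x ^= x >> 1;
        return (int)(x & 1);
    }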
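
x87 unsigned conversions: the x87 load/store instructions only handle signed integers, so unsigned 64-bit values need a correction step around the signed conversion. A minimal C sketch of the lowering the backend would have to emit; the function names are illustrative only.

    #include <stdint.h>

    /* unsigned -> x87: convert as signed, then add 2^64 if the value was
     * reinterpreted as negative */
    long double u64_to_ld(uint64_t x)
    {
        long double r = (long double)(int64_t)x;
        if ((int64_t)x < 0)
            r += 18446744073709551616.0L; /* 2^64 */
        return r;
    }

    /* x87 -> unsigned: values >= 2^63 do not fit a signed conversion, so
     * subtract 2^63 first and put the top bit back afterwards */
    uint64_t ld_to_u64(long double d)
    {
        if (d < 9223372036854775808.0L) /* 2^63 */
            return (uint64_t)(int64_t)d;
        return (uint64_t)(int64_t)(d - 9223372036854775808.0L)
               | (UINT64_C(1) << 63);
    }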