High-spec network switches may offer two types of physical interface connectors. The first is the standard RJ45 connector, commonly referred to as "copper", which only relatively recently gained support for speeds above 1Gbps. With the right switch, cable, and Ethernet card, you can achieve what the industry calls multi-gig speeds (2.5Gbps, 5Gbps, and 10Gbps) over Cat6/Cat7 cabling, with some limitations on cable length.
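If you want to confirm what speed such a link actually negotiated on a Linux host, the kernel exposes it through sysfs. Here is a minimal Python sketch, assuming a Linux machine; the interface name enp3s0 is only a placeholder for your own NIC:

    # Minimal sketch: report the negotiated link speed of a NIC on Linux.
    from pathlib import Path

    IFACE = "enp3s0"  # placeholder - replace with your interface (see `ip link`)

    def link_speed_mbps(iface: str) -> int:
        # The kernel reports the negotiated speed in Mb/s via sysfs;
        # the value is -1 (or the read fails) when the link is down.
        return int(Path(f"/sys/class/net/{iface}/speed").read_text().strip())

    speed = link_speed_mbps(IFACE)
    print(f"{IFACE}: {speed} Mb/s" + (" (multi-gig)" if speed > 1000 else ""))

A reading of 2500, 5000, or 10000 means the switch, cable, and card have successfully negotiated one of the multi-gig rates.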
Traditionally though, switches have offered interfaces at 1Gbps and above through modular cages (essentially empty slots). There are quite a few types of these ports, but the industry has standardized on two cage form factors: 1) SFP/SFP+ and 2) QSFP. A cage always requires an appropriate module before a cable can be connected. The reason for this modularity is that it lets the customer choose modules according to speed requirements, the type of copper or fiber optic cable used, cable length, and so on.
SFP cages support up to 1Gbps speeds.
SFP+ cages support up to 10Gbps speeds, at the time of writing.
QSFP cages support higher speeds still: 40Gbps (QSFP+) and 100Gbps (QSFP28), at the time of writing, while 25Gbps is typically handled by the related SFP28 form factor.
Because the cost of populating modular SFP+ switches was high, with two modules needed per connection (one at the switch and one at the server), purpose-built copper cables were introduced, known as DACs (Direct Attach Copper cables) or SFP+ Twinax. These come in lengths of up to 5 meters for passive DACs and 10 meters for active DACs, and they are especially well suited to datacenter deployments.
In your case, the cheapest solution is to use DACs to connect your cards to the SFP+ cages of your switch. With that switch you can have a maximum of four SFP+ Twinax cables.
My experience has been that, especially with reputable NICs such as Intel's, mixing vendors between the switch, the DAC, and the NIC does not cause compatibility issues. I am using Mellanox DACs with QLogic 10Gbps adapters on a MikroTik 10G switch and have had no problems.
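If you want to double-check what the NIC actually sees when a DAC is inserted, most Linux drivers let you dump the module EEPROM with ethtool -m. The following Python sketch simply wraps that command and pulls out the vendor fields; it assumes a Linux host with ethtool installed, a driver that supports module EEPROM reads, root privileges, and enp3s0 is again only a placeholder interface name:

    # Sketch: show the vendor information a NIC reports for an inserted
    # SFP+ module or DAC, by wrapping `ethtool -m` (module EEPROM dump).
    import subprocess

    IFACE = "enp3s0"  # placeholder - replace with your interface name

    def module_info(iface: str) -> dict:
        out = subprocess.run(
            ["ethtool", "-m", iface],
            capture_output=True, text=True, check=True,
        ).stdout
        info = {}
        for line in out.splitlines():
            if ":" in line:
                key, _, value = line.partition(":")
                info[key.strip()] = value.strip()
        return info

    info = module_info(IFACE)
    # Field names vary slightly by driver; these are the common ones.
    for field in ("Identifier", "Vendor name", "Vendor PN", "Vendor SN"):
        print(f"{field}: {info.get(field, 'n/a')}")

This is handy when mixing vendors, since it shows exactly which module or DAC the adapter has recognized.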