The provisioning TCP/IP stack is used to isolate traffic for cold migration, VM clones, and snapshots, and to assign a dedicated default gateway, routing table, and DNS configuration for this traffic.
In your environment there would still be a benefit to using the provisioning stack, but that benefit really comes down to being able to manage and differentiate this traffic in a more formal way, giving you more granular control. That may or may not be an outcome you care about.
For instance, you may want to put this traffic on its own VLAN, enable jumbo frame support along the traffic path, or secure it differently than other traffic. You may also want to pass it through an IDS/IPS or other in-line monitoring solution, and need to be able to identify this traffic specifically for whatever purpose you are addressing.
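As a rough sketch of what that looks like from the ESXi shell, here is how you might enable jumbo frames and tag a VLAN for a dedicated provisioning portgroup. The switch name (vSwitch1), portgroup name (Provisioning), VLAN ID, and vmk number are examples for illustration, and you should confirm the exact flag spellings with `--help` on your esxcli version:

```shell
# Raise the MTU on the standard switch that carries provisioning traffic
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000

# Tag the provisioning portgroup onto its own VLAN (example: VLAN 50)
esxcli network vswitch standard portgroup set --portgroup-name=Provisioning --vlan-id=50

# Raise the MTU on the VMkernel adapter itself so jumbo frames apply end to end
esxcli network ip interface set --interface-name=vmk2 --mtu=9000
```

Keep in mind that every physical switch port in the path also has to support the larger MTU, or the jumbo frames will be dropped along the way.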
There are many scenarios where dedicated VMkernel adapters offer benefits on their own, and a dedicated TCP/IP stack adds several more, as noted below:
Route the traffic for migration of powered-on or powered-off virtual machines through a default gateway that is different from the gateway assigned to the default stack on the host.
By using a separate default gateway, you can use DHCP for IP address assignment to the VMkernel adapters for migration in a flexible way.
Assign a separate set of buffers and sockets.
Avoid routing table conflicts that might otherwise appear when many features are using a common TCP/IP stack.
Isolate traffic to improve security.
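To make the separate-gateway point concrete, here is roughly how you would create a VMkernel adapter on the provisioning stack and give that stack its own default gateway. The adapter name, portgroup, and addresses are illustrative, and the flag names are from memory, so verify them with `--help` before running:

```shell
# List the TCP/IP stacks on the host; the provisioning stack's key is vSphereProvisioning
esxcli network ip netstack list

# Create a VMkernel adapter bound to the provisioning stack
esxcli network ip interface add --interface-name=vmk2 \
    --portgroup-name=Provisioning --netstack=vSphereProvisioning

# Give it a static address
esxcli network ip interface ipv4 set --interface-name=vmk2 \
    --ipv4=192.168.50.11 --netmask=255.255.255.0 --type=static

# Set a default gateway that applies only to the provisioning stack,
# independent of the gateway on the default stack
esxcli network ip route ipv4 add --gateway=192.168.50.1 \
    --network=default --netstack=vSphereProvisioning
```

This is what gives you the per-stack routing table and gateway from the list above: the default stack's gateway is untouched, and only provisioning traffic follows the new one.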
To address the other part of your question, about multi-NIC capability: the short answer is YES, it is supported.
Remember that the way we leverage multiple physical adapters in the host is by having multiple uplinks/dvUplinks on our switches (standard or distributed) and then binding multiple physical adapters (vmnics) to the switch via those uplinks. We then use a combination of load balancing policy, network failover detection, notify switches, and failback settings in the teaming policy to sculpt the exact behavior we want. Binding multiple vmnics to a dedicated standard switch whose VMkernel adapter uses the provisioning stack gives you the best results.

Be aware of one side effect, though: after you create a VMkernel adapter on the provisioning TCP/IP stack, only that stack can be used for cold migration, cloning, and snapshots on that host. The VMkernel adapters on the default TCP/IP stack are disabled for the provisioning service. If a live migration uses the default TCP/IP stack while you are configuring VMkernel adapters on the provisioning stack, that data transfer still completes successfully, but the involved VMkernel adapters on the default stack are disabled for future cold migration, cross-host cloning, and snapshot sessions.
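The teaming side can be sketched in a few commands as well. Again, the switch and vmnic names are examples, and you should verify the flags against your esxcli version:

```shell
# Bind a second physical adapter to the dedicated provisioning switch
esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch1

# Make both uplinks active so the teaming policy can use them
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 \
    --active-uplinks=vmnic2,vmnic3

# Confirm which VMkernel adapters are bound to which TCP/IP stack
esxcli network ip interface list
```

The last command is a quick way to check for the side effect described above: you can see at a glance which vmk adapters ended up on the provisioning stack versus the default stack.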
I hope that helps to get it all sorted out for you. If you have any additional questions, just let me know.
Good Luck !!