You have to NAT through opnsense, or set up different routing tables on the VM (there's a sketch of the routing-table approach at the end of this post).
The client is 192.168.1.4.
The server (the VM) is 192.168.1.5 on vlan1 and 192.168.2.5 on vlan2.
Opnsense handles vlan 1 and vlan 2 (for simplicity's sake) as 192.168.<VLAN>.0/24, and will happily forward packets between the two subnets.
As the VM has 2 network devices, one on VLAN1 and one on VLAN2, it always has a direct connection to the client via VLAN1.
So, if your client connects to 192.168.2.5, it has no on-link route to that subnet and doesn't know where to send the packet directly.
It sends it to the gateway (opnsense), which then forwards it to vlan2.
The VM then receives the packet and replies to the client's address, 192.168.1.4 (opnsense doesn't alter the sender's address).
The way Linux works by default is that it picks the outgoing interface by looking up the destination address, so it uses the network device that is in the same subnet as the destination - as opposed to replying on the same device the packet arrived on.
So the VM sends the packet out VLAN1, directly back to the client.
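You can actually ask the kernel which interface it would pick for that reply. A quick sketch, assuming the VLAN1 NIC is eth0 and the VLAN2 NIC is eth1 (interface names are just examples; `ip route get` only does a lookup, it doesn't send anything):

```python
import subprocess

# Ask the kernel where a packet to the client (192.168.1.4) would go.
# With only the default routing table, the connected route for
# 192.168.1.0/24 on eth0 wins, even when the reply is sourced from
# the VLAN2 address 192.168.2.5.
for cmd in (
    ["ip", "route", "get", "192.168.1.4"],
    ["ip", "route", "get", "192.168.1.4", "from", "192.168.2.5"],
):
    print("$", " ".join(cmd))
    print(subprocess.run(cmd, capture_output=True, text=True).stdout)
```

On a stock config (no policy rules), both lookups should come back with dev eth0, which is exactly the asymmetric path described above.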
And this works. Packets from the client to the server go via opnsense; packets from the server to the client go directly.
For a while.
Then opnsense sees that there is an ongoing connection between vlan1 and vlan2... except it's only seeing one direction of it; the returning SYN-ACKs/ACKs/FINs never pass through it. So it thinks it's a timed-out connection, or something dodgy going on... and it drops the state, killing the connection.
And now your client can't talk to the server through vlan2, and it has to reconnect.
I pulled my hair out over this.
I ended up just having a single NIC per VM.
Here's an SE question that might help you.
https://unix.stackexchange.com/questions/4420/reply-on-same-interface-as-incoming
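If you want to keep both NICs, the fix discussed in that thread is source-based policy routing: give the second address its own routing table so replies leave via the interface that owns their source address. Just a sketch of the idea, not a drop-in config - the interface name (eth1), opnsense's VLAN2 address (192.168.2.1) and the table number (100) are assumptions for this example topology:

```python
import subprocess

# Source-based policy routing sketch: anything sourced from the VLAN2
# address consults its own routing table, whose default route points
# back out the VLAN2 NIC via opnsense.
# eth1, 192.168.2.1 and table 100 are assumptions - adjust to taste.
# Needs root.
VLAN2_ADDR = "192.168.2.5"
VLAN2_GW = "192.168.2.1"   # opnsense's address on vlan2 (assumed)
VLAN2_DEV = "eth1"
TABLE = "100"

commands = [
    # Connected route for the vlan2 subnet, inside the dedicated table
    ["ip", "route", "add", "192.168.2.0/24", "dev", VLAN2_DEV,
     "src", VLAN2_ADDR, "table", TABLE],
    # Everything else in that table goes via opnsense on vlan2
    ["ip", "route", "add", "default", "via", VLAN2_GW,
     "dev", VLAN2_DEV, "table", TABLE],
    # Packets with the vlan2 source address use that table
    ["ip", "rule", "add", "from", VLAN2_ADDR, "table", TABLE],
]

for cmd in commands:
    subprocess.run(cmd, check=True)
```

With that rule in place, replies sourced from 192.168.2.5 go back out through opnsense on vlan2, so the firewall sees both directions of the flow and stops tearing down the state. Note these commands don't survive a reboot; you'd normally persist the equivalent in your distro's network config.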