Network Synchronization Methods


I've been reading up on network synchronization recently. As most people describe it, game synchronization comes in two kinds: frame synchronization (frame-lockstep synchronization) and state synchronization. (I've seen some argument over these two terms, but I'm not going to agonize over what they should properly be called; as long as the concepts are understood correctly the names don't matter much, so I'll use "frame synchronization" and "state synchronization" here, since I need some words to work with.)

But the way some blog posts explain it leaves me thoroughly confused. For example:

Frame synchronization means the game logic runs on the client; state synchronization means it runs on the server.

Frame synchronization forwards commands; state synchronization sends state.

Frame synchronization relies on client-side computation, so it is hard to prevent cheating.

…

What confuses me is this: are "where the game logic runs", "what form the transmitted data takes", and "whether there is an authoritative server" really the criteria that distinguish the two?

To figure this out, I went to look at how it is actually implemented in code.

Start with DOOM, often called the progenitor of the FPS. How does its synchronization work?

It advances in units of a tic and maintains two arrays, nettics and netcmds, holding each player's tic and cmd respectively. Every NetUpdate call tries to receive messages and update these two arrays. As the code shows, the game only advances once every player's tic has caught up with the game's current tic; otherwise it stays stuck in the loop, and after a certain timeout it bails out to give the menu a chance to run.

Here is the relevant part, with some unrelated code removed:

	//
	// NETWORKING
	//
	// gametic is the tic about to (or currently being) run
	// maketic is the tick that hasn't had control made for it yet
	// nettics[] has the maketics for all players 
	//
	// a gametic cannot be run until nettics[] > gametic for all players
	//
	int nettics[MAXNETNODES];

	void TryRunTics (void)
	{
		int		i;
		int		lowtic;
		int		entertic;
		static int	oldentertics;
		int		realtics;
		int		availabletics;
		int		counts;
		int		numplaying;
		
		// get real tics		
		entertic = I_GetTime ()/ticdup;
		realtics = entertic - oldentertics;
		oldentertics = entertic;
		
		// get available tics
		NetUpdate ();
		
		lowtic = MAXINT;
		numplaying = 0;
		for (i=0 ; i<doomcom->numnodes ; i++)
		{
		if (nodeingame[i])
		{
			numplaying++;
			if (nettics[i] < lowtic)
			lowtic = nettics[i];
		}
		}
		availabletics = lowtic - gametic/ticdup;
		
		// decide how many tics to run
		if (realtics < availabletics-1)
		counts = realtics+1;
		else if (realtics < availabletics)
		counts = realtics;
		else
		counts = availabletics;
		
		if (counts < 1)
		counts = 1;
			
		frameon++;

		if (debugfile)
		fprintf (debugfile,
			"=======real: %i  avail: %i  game: %i\n",
			realtics, availabletics,counts);

		if (!demoplayback)
		{	
		// ideally nettics[0] should be 1 - 3 tics above lowtic
		// if we are consistently slower, speed up time
		for (i=0 ; i<MAXPLAYERS ; i++)
			if (playeringame[i])
			break;
		if (consoleplayer == i)
		{
			// the key player does not adapt
		}
		else
		{
			if (nettics[0] <= nettics[nodeforplayer[i]])
			{
			gametime--;
			// printf ("-");
			}
			frameskip[frameon&3] = (oldnettics > nettics[nodeforplayer[i]]);
			oldnettics = nettics[0];
			if (frameskip[0] && frameskip[1] && frameskip[2] && frameskip[3])
			{
			skiptics = 1;
			// printf ("+");
			}
		}
		}// demoplayback
		
		// wait for new tics if needed
		while (lowtic < gametic/ticdup + counts)	
		{
		NetUpdate ();   
		lowtic = MAXINT;
		
		for (i=0 ; i<doomcom->numnodes ; i++)
			if (nodeingame[i] && nettics[i] < lowtic)
			lowtic = nettics[i];
		
		if (lowtic < gametic/ticdup)
			I_Error ("TryRunTics: lowtic < gametic");
					
		// don't stay in here forever -- give the menu a chance to work
		if (I_GetTime ()/ticdup - entertic >= 20)
		{
			M_Ticker ();
			return;
		} 
		}
		
		// run the count * ticdup tics
		while (counts--)
		{
		for (i=0 ; i<ticdup ; i++)
		{
			if (gametic/ticdup > lowtic)
			I_Error ("gametic>lowtic");
			if (advancedemo)
			D_DoAdvanceDemo ();
			M_Ticker ();
			G_Ticker ();
			gametic++;
			
			// modify command for duplicated tics
			if (i != ticdup-1)
			{
			ticcmd_t	*cmd;
			int			buf;
			int			j;
					
			buf = (gametic/ticdup)%BACKUPTICS; 
			for (j=0 ; j<MAXPLAYERS ; j++)
			{
				cmd = &netcmds[j][buf];
				cmd->chatchar = 0;
				if (cmd->buttons & BT_SPECIAL)
				cmd->buttons = 0;
			}
			}
		}
		NetUpdate ();	// check for new console commands
		}
	}

As you can see, this method keeps collecting player commands and only advances the game to the next tic once every player's commands have arrived. The length of a tic is therefore not fixed, and players with good connections are dragged down by players with bad ones, which doesn't sound like a great experience. This method is what's called "frame synchronization", or lockstep.

The core of the algorithm is the locked step, which keeps everyone's actions consistent.
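
To make that lockstep core concrete, here is a minimal sketch (not DOOM's actual code; the player arrays and net_poll_inputs are hypothetical stand-ins): the loop refuses to advance until every player's input for the current tick has arrived, then feeds the same commands to the simulation on every machine.

	#include <stdbool.h>
	#include <stdio.h>
	
	#define MAX_PLAYERS 4
	
	typedef struct { int buttons; } cmd_t;
	
	static int   player_tick[MAX_PLAYERS]; /* latest tick received from each player */
	static cmd_t player_cmd[MAX_PLAYERS];  /* that player's command for its tick    */
	static int   game_tick;
	
	/* Stand-in for the real network read: here we just pretend every
	   player's input for the current tick arrives immediately. */
	static void net_poll_inputs(void)
	{
		for (int i = 0; i < MAX_PLAYERS; i++)
			player_tick[i] = game_tick;
	}
	
	static void run_game_tick(const cmd_t cmds[])
	{
		(void)cmds;
		printf("advanced to tick %d\n", game_tick + 1);
	}
	
	static void lockstep_frame(void)
	{
		/* Locked: do not advance until every player has caught up to game_tick. */
		for (;;) {
			bool all_arrived = true;
			net_poll_inputs();
			for (int i = 0; i < MAX_PLAYERS; i++)
				if (player_tick[i] < game_tick)
					all_arrived = false;
			if (all_arrived)
				break;            /* everyone is ready: unlock */
		}
		run_game_tick(player_cmd);        /* same inputs on every machine */
		game_tick++;
	}
	
	int main(void)
	{
		for (int t = 0; t < 3; t++)
			lockstep_frame();
		return 0;
	}

Because every machine runs the same commands in the same order, the simulation has to be fully deterministic for this to work.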

Now look at how DOOM 3 does it: it spins in place receiving packets, and once enough time has elapsed it always advances the game to the next frame. It tracks this with gameTimeResidual, adding the elapsed time after each receive; select is given a timeout, so GetPacketBlocking is never blocked indefinitely.

This is what might be called optimistic bucket synchronization: the previously variable wait is replaced with a fixed interval, and any player whose input hasn't arrived is simply assumed to have done nothing.

	// spin in place processing incoming packets until enough time lapsed to run a new game frame
	do {

		do {

			// blocking read with game time residual timeout
			newPacket = serverPort.GetPacketBlocking( from, msgBuf, size, sizeof( msgBuf ), USERCMD_MSEC - gameTimeResidual - 1 );
			if ( newPacket ) {
				msg.Init( msgBuf, sizeof( msgBuf ) );
				msg.SetSize( size );
				msg.BeginReading();
				if ( ProcessMessage( from, msg ) ) {
					return;	// return because rcon was used
				}
			}

			msec = UpdateTime( 100 );
			gameTimeResidual += msec;

		} while( newPacket );

	} while( gameTimeResidual < USERCMD_MSEC );
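
As I read it, the bucket idea could be sketched like this (hypothetical names, stubbed timer and network reads; not DOOM 3's code): the frame always runs once a fixed interval has elapsed, and any player whose command didn't arrive in time gets an empty command rather than being waited on.

	#include <stdio.h>
	#include <string.h>
	
	#define MAX_PLAYERS 4
	#define FRAME_MSEC  16   /* fixed bucket length, cf. USERCMD_MSEC */
	
	typedef struct { int buttons; } cmd_t;
	
	static cmd_t bucket[MAX_PLAYERS];    /* commands collected this bucket */
	static int   received[MAX_PLAYERS];  /* which players we heard from    */
	static int   frame_no;
	
	/* Stand-in for the real timer: pretend 4 ms pass per poll. */
	static int elapsed_msec(void) { return 4; }
	
	/* Stand-in for a nonblocking network read; here nothing ever arrives. */
	static int try_read_cmd(int *player, cmd_t *cmd)
	{
		(void)player; (void)cmd;
		return 0;
	}
	
	static void run_game_frame(const cmd_t cmds[])
	{
		(void)cmds;
		printf("frame %d ran on schedule\n", ++frame_no);
	}
	
	static void bucket_frame(void)
	{
		int residual = 0;
		int player;
		cmd_t cmd;
	
		memset(received, 0, sizeof(received));
	
		/* Spin collecting commands, but only until the bucket closes. */
		while (residual < FRAME_MSEC) {
			while (try_read_cmd(&player, &cmd)) {
				bucket[player] = cmd;
				received[player] = 1;
			}
			residual += elapsed_msec();
		}
	
		/* The frame runs no matter what: missing players default to no-op. */
		for (int i = 0; i < MAX_PLAYERS; i++)
			if (!received[i])
				bucket[i].buttons = 0;
	
		run_game_frame(bucket);
	}
	
	int main(void)
	{
		for (int i = 0; i < 3; i++)
			bucket_frame();
		return 0;
	}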

After DOOM, id Software released Quake (the engine behind classics like Half-Life and Counter-Strike). Quake's synchronization method is known as snapshot synchronization, the forerunner of state synchronization (a snapshot sends the state of the entire world, while state synchronization splits the data more finely, sending incremental state and skipping what hasn't changed). Again, let's look only at the network synchronization code.

	//
	// main loop
	//
	oldtime = Sys_DoubleTime () - 0.1;
	while (1)
	{
	// select on the net socket and stdin
	// the only reason we have a timeout at all is so that if the last
	// connected client times out, the message would not otherwise
	// be printed until the next event.
		FD_ZERO(&fdset);
		if (do_stdin)
			FD_SET(0, &fdset);
		FD_SET(net_socket, &fdset);
		timeout.tv_sec = 1;
		timeout.tv_usec = 0;
		if (select (net_socket+1, &fdset, NULL, NULL, &timeout) == -1)
			continue;
		stdin_ready = FD_ISSET(0, &fdset);
	
	// find time passed since last cycle
		newtime = Sys_DoubleTime ();
		time = newtime - oldtime;
		oldtime = newtime;
	
		SV_Frame (time);
	
	// extrasleep is just a way to generate a fucked up connection on purpose
		if (sys_extrasleep.value)
			usleep (sys_extrasleep.value);
	}

Here you can see that the process first blocks in select until a socket event arrives (or the one-second timeout expires), and then calls SV_Frame.

	/*
	==================
	SV_Frame
	==================
	*/
	void SV_Frame (float time)
	{
		static double	start, end;
	
		start = Sys_DoubleTime ();
		svs.stats.idle += start - end;
	
	// keep the random time dependent
		rand ();
	
	// decide the simulation time
		if (!sv.paused) {
			realtime += time;
			sv.time += time;
		}
	
	// check timeouts
		SV_CheckTimeouts ();
	
	// toggle the log buffer if full
		SV_CheckLog ();
	
	// move autonomous things around if enough time has passed
		if (!sv.paused)
			SV_Physics ();
	
	// get packets
		SV_ReadPackets ();
	
	// check for commands typed to the host
		SV_GetConsoleCommands ();
	
	// process console commands
		Cbuf_Execute ();
	
		SV_CheckVars ();
	
	// send messages back to the clients that had packets read this frame
		SV_SendClientMessages ();
	
	// send a heartbeat to the master if needed
		Master_Heartbeat ();
	
	// collect timing statistics
		end = Sys_DoubleTime ();
		svs.stats.active += end-start;
		if (++svs.stats.count == STATFRAMES)
		{
			svs.stats.latched_active = svs.stats.active;
			svs.stats.latched_idle = svs.stats.idle;
			svs.stats.latched_packets = svs.stats.packets;
			svs.stats.active = 0;
			svs.stats.idle = 0;
			svs.stats.packets = 0;
			svs.stats.count = 0;
		}
	}
	/*
	=================
	SV_ReadPackets
	=================
	*/
	void SV_ReadPackets (void)
	{
		int			i;
		client_t	*cl;
		qboolean	good;
		int			qport;
	
		good = false;
		while (NET_GetPacket ())
		{
			if (SV_FilterPacket ())
			{
				SV_SendBan ();	// tell them we aren't listening...
				continue;
			}
	
			// check for connectionless packet (0xffffffff) first
			if (*(int *)net_message.data == -1)
			{
				SV_ConnectionlessPacket ();
				continue;
			}
	
			// read the qport out of the message so we can fix up
			// stupid address translating routers
			MSG_BeginReading ();
			MSG_ReadLong ();		// sequence number
			MSG_ReadLong ();		// sequence number
			qport = MSG_ReadShort () & 0xffff;
	
			// check for packets from connected clients
			for (i=0, cl=svs.clients ; i<MAX_CLIENTS ; i++,cl++)
			{
				if (cl->state == cs_free)
					continue;
				if (!NET_CompareBaseAdr (net_from, cl->netchan.remote_address))
					continue;
				if (cl->netchan.qport != qport)
					continue;
				if (cl->netchan.remote_address.port != net_from.port)
				{
					Con_DPrintf ("SV_ReadPackets: fixing up a translated port\n");
					cl->netchan.remote_address.port = net_from.port;
				}
				if (Netchan_Process(&cl->netchan))
				{	// this is a valid, sequenced packet, so process it
					svs.stats.packets++;
					good = true;
					cl->send_message = true;	// reply at end of frame
					if (cl->state != cs_zombie)
						SV_ExecuteClientMessage (cl);
				}
				break;
			}
	
			if (i != MAX_CLIENTS)
				continue;
	
			// packet is not from a known client
			//	Con_Printf ("%s:sequenced packet without connection\n"
			// ,NET_AdrToString(net_from));
		}
	}
	/*
	=======================
	SV_SendClientMessages
	=======================
	*/
	void SV_SendClientMessages (void)
	{
		int			i, j;
		client_t	*c;
	
	// update frags, names, etc
		SV_UpdateToReliableMessages ();
	
	// build individual updates
		for (i=0, c = svs.clients ; i<MAX_CLIENTS ; i++, c++)
		{
			if (!c->state)
				continue;
	
			if (c->drop) {
				SV_DropClient(c);
				c->drop = false;
				continue;
			}
	
			// check to see if we have a backbuf to stick in the reliable
			if (c->num_backbuf) {
				// will it fit?
				if (c->netchan.message.cursize + c->backbuf_size[0] <
					c->netchan.message.maxsize) {
	
					Con_DPrintf("%s: backbuf %d bytes\n",
						c->name, c->backbuf_size[0]);
	
					// it'll fit
					SZ_Write(&c->netchan.message, c->backbuf_data[0],
						c->backbuf_size[0]);
	
					//move along, move along
					for (j = 1; j < c->num_backbuf; j++) {
						memcpy(c->backbuf_data[j - 1], c->backbuf_data[j],
							c->backbuf_size[j]);
						c->backbuf_size[j - 1] = c->backbuf_size[j];
					}
	
					c->num_backbuf--;
					if (c->num_backbuf) {
						memset(&c->backbuf, 0, sizeof(c->backbuf));
						c->backbuf.data = c->backbuf_data[c->num_backbuf - 1];
						c->backbuf.cursize = c->backbuf_size[c->num_backbuf - 1];
						c->backbuf.maxsize = sizeof(c->backbuf_data[c->num_backbuf - 1]);
					}
				}
			}
	
			// if the reliable message overflowed,
			// drop the client
			if (c->netchan.message.overflowed)
			{
				SZ_Clear (&c->netchan.message);
				SZ_Clear (&c->datagram);
				SV_BroadcastPrintf (PRINT_HIGH, "%s overflowed\n", c->name);
				Con_Printf ("WARNING: reliable overflow for %s\n",c->name);
				SV_DropClient (c);
				c->send_message = true;
				c->netchan.cleartime = 0;	// don't choke this message
			}
	
			// only send messages if the client has sent one
			// and the bandwidth is not choked
			if (!c->send_message)
				continue;
			c->send_message = false;	// try putting this after choke?
			if (!sv.paused && !Netchan_CanPacket (&c->netchan))
			{
				c->chokecount++;
				continue;		// bandwidth choke
			}
	
			if (c->state == cs_spawned)
				SV_SendClientDatagram (c);
			else
				Netchan_Transmit (&c->netchan, 0, NULL);	// just update reliable
		}
	}

The key steps are SV_CheckTimeouts, SV_ReadPackets, and SV_SendClientMessages.

First, clients that haven't been heard from within a certain time have their connections dropped. Then client messages are read and executed: when a message arrives, the client is marked send_message = true and its commands are run. When replies are sent at the end of the frame, they only go to players whose send_message is true. What gets pushed out is the world state, which the client simply renders.
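
As a contrast with lockstep, here is a minimal sketch of the snapshot model (hypothetical structures, not Quake's code): the server runs the authoritative simulation and periodically ships the entire world state, and the client simply overwrites its local copy and renders it.

	#include <stdio.h>
	#include <string.h>
	
	#define MAX_ENTITIES 8
	
	typedef struct { float x, y; int health; } entity_t;
	
	typedef struct {
		int      tick;
		entity_t ents[MAX_ENTITIES];
	} snapshot_t;
	
	/* ---- server side ---- */
	static entity_t world[MAX_ENTITIES];
	static int      server_tick;
	
	static snapshot_t server_make_snapshot(void)
	{
		snapshot_t s;
		s.tick = server_tick;
		memcpy(s.ents, world, sizeof(world));
		return s;   /* in a real game this is what goes on the wire */
	}
	
	/* ---- client side ---- */
	static snapshot_t client_view;
	
	static void client_receive_snapshot(const snapshot_t *s)
	{
		/* No game logic here: adopt the server's state and render it. */
		client_view = *s;
		printf("client rendering tick %d, entity 0 at (%.1f, %.1f)\n",
		       client_view.tick, client_view.ents[0].x, client_view.ents[0].y);
	}
	
	int main(void)
	{
		for (server_tick = 0; server_tick < 3; server_tick++) {
			world[0].x += 1.0f;            /* server runs the simulation   */
			snapshot_t s = server_make_snapshot();
			client_receive_snapshot(&s);   /* "network" is a function call */
		}
		return 0;
	}

Note that no game logic runs on the client in this model; it only displays whatever the last snapshot said.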

Now look at Quake 3. This time the server receives input through event handling, maintaining an eventQue array. A spin was added: when the loop is running too fast, it spins in place, trying to receive new events while it waits.

	// we may want to spin here if things are going too fast
	if ( !com_dedicated->integer && com_maxfps->integer > 0 && !com_timedemo->integer ) {
		minMsec = 1000 / com_maxfps->integer;
	} else {
		minMsec = 1;
	}
	do {
		com_frameTime = Com_EventLoop();
		if ( lastTime > com_frameTime ) {
			lastTime = com_frameTime;		// possible on first frame
		}
		msec = com_frameTime - lastTime;
	} while ( msec < minMsec );
	Cbuf_Execute ();
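
The eventQue mentioned above is a ring buffer in Quake 3. A minimal sketch of that pattern (simplified types, not the engine's actual code):

	#include <stdio.h>
	
	#define MAX_QUEUED_EVENTS  16                      /* must be a power of two */
	#define MASK_QUEUED_EVENTS (MAX_QUEUED_EVENTS - 1)
	
	typedef struct { int time; int type; int value; } event_t;
	
	static event_t eventQueue[MAX_QUEUED_EVENTS];
	static int     eventHead, eventTail;
	
	/* Producer: called whenever the system layer receives input or a packet. */
	static void queue_event(event_t ev)
	{
		if (eventHead - eventTail >= MAX_QUEUED_EVENTS)
			eventTail++;                   /* queue full: drop the oldest */
		eventQueue[eventHead & MASK_QUEUED_EVENTS] = ev;
		eventHead++;
	}
	
	/* Consumer: the frame loop drains events until the queue is empty. */
	static int get_event(event_t *out)
	{
		if (eventTail == eventHead)
			return 0;                      /* nothing pending */
		*out = eventQueue[eventTail & MASK_QUEUED_EVENTS];
		eventTail++;
		return 1;
	}
	
	int main(void)
	{
		for (int i = 0; i < 3; i++)
			queue_event((event_t){ .time = i, .type = 1, .value = i * 10 });
	
		event_t ev;
		while (get_event(&ev))
			printf("event at time %d, value %d\n", ev.time, ev.value);
		return 0;
	}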

So what actually distinguishes these approaches? In my view, neither the form of the transmitted data nor where the game logic runs is the distinguishing criterion. As 云风 put it:

What is the synchronization method that stands opposite lockstep? I think the crux is whether or not to lock.

The core of DOOM's algorithm is the lock: it synchronizes after collecting every player's commands, and stays locked until they have all arrived. A later refinement added a timeout to the wait: if a player's commands haven't arrived when the timeout fires, he is simply treated as idle.

Suppose we changed it from forwarding commands to having the server execute the commands and forward the resulting state: the outcome would be the same. The core of the algorithm hasn't changed; it is still lockstep. And once state is being forwarded, the client doesn't even need to run the game logic.

So what does Quake do differently? It waits for socket messages and runs commands as soon as they arrive; later refinements added the event list and the spin. Where is the difference? It never locks up waiting for all players' commands before proceeding, although the spin added later looks a lot like the one added to DOOM afterwards.

When DOOM spins, it is waiting for the other players' commands, with a timeout on top; when Quake spins, it is buffering commands. The difference is plain to see: the first advances to the next tick as soon as it has collected all the commands, while the second waits until the spin time runs out before executing the commands and sending out the results.


In short, I don't think the two synchronization methods are distinguished by the form of the data sent or by where the game logic executes; the two can even be combined. Transmitting commands costs less bandwidth than transmitting state. In an RTS-style game, a single player command can move a huge number of units, and sending the state of that many rapidly changing units really would burn a lot of traffic. Transmitting commands does require the client to run the game logic, but that's no obstacle, because the server can run the same logic and treat its own result as authoritative, so even a cheating client can't change the outcome. Outside real-time competitive games, genres like MMORPGs can mostly get by just sending clients whichever state has changed.
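
For the "send only what changed" case, here is a minimal sketch of delta encoding (a hypothetical wire format, not any particular engine's protocol): compare the new state against the last state the client acknowledged and emit only the fields that differ, each tagged with a field id.

	#include <stdio.h>
	#include <string.h>
	
	typedef struct { float x, y; int health; } entity_t;
	
	/* Field ids for the hypothetical wire format. */
	enum { FIELD_X, FIELD_Y, FIELD_HEALTH };
	
	/* Append one changed field to the buffer; returns bytes written. */
	static int write_field(unsigned char *buf, int id, const void *val, int len)
	{
		buf[0] = (unsigned char)id;
		memcpy(buf + 1, val, len);
		return 1 + len;
	}
	
	/* Emit only the fields that differ from the client's last-acked state. */
	static int delta_encode(const entity_t *old, const entity_t *cur,
	                        unsigned char *buf)
	{
		int n = 0;
		if (old->x != cur->x)
			n += write_field(buf + n, FIELD_X, &cur->x, sizeof(cur->x));
		if (old->y != cur->y)
			n += write_field(buf + n, FIELD_Y, &cur->y, sizeof(cur->y));
		if (old->health != cur->health)
			n += write_field(buf + n, FIELD_HEALTH, &cur->health, sizeof(cur->health));
		return n;   /* 0 bytes if nothing changed: send nothing at all */
	}
	
	int main(void)
	{
		entity_t acked = { 1.0f, 2.0f, 100 };
		entity_t now   = { 1.0f, 3.5f, 100 };   /* only y moved */
		unsigned char buf[64];
	
		int bytes = delta_encode(&acked, &now, buf);
		printf("delta is %d bytes instead of %d\n",
		       bytes, (int)sizeof(entity_t));
		return 0;
	}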

Of course, the client can still run cheats like lifting the fog of war. The reason is that if you transmit only commands, the server has to forward every player's commands to every client, so a client can easily reconstruct the whole map. Sending both commands and state should avoid the problem: the server uses an AOI algorithm to decide which players' commands get forwarded to which players, and when a unit inside someone's fog steps out of it, the server pushes that unit's state instead. This would be somewhat more work to implement.
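
To close, a minimal sketch of that AOI filtering idea (hypothetical, using a plain distance radius): before forwarding a player's command, the server checks whether the sender falls inside each recipient's area of interest, so a client never hears about units hidden in its fog.

	#include <stdio.h>
	#include <math.h>
	#include <stdbool.h>
	
	#define MAX_PLAYERS 4
	#define AOI_RADIUS  30.0f   /* hypothetical interest radius */
	
	typedef struct { float x, y; } player_t;
	typedef struct { int player; int buttons; } cmd_t;
	
	static player_t players[MAX_PLAYERS];
	
	static bool in_aoi(const player_t *a, const player_t *b)
	{
		float dx = a->x - b->x, dy = a->y - b->y;
		return sqrtf(dx * dx + dy * dy) <= AOI_RADIUS;
	}
	
	/* Forward a command only to players who can "see" its sender. */
	static void forward_cmd(const cmd_t *cmd)
	{
		for (int i = 0; i < MAX_PLAYERS; i++) {
			if (i == cmd->player)
				continue;
			if (in_aoi(&players[cmd->player], &players[i]))
				printf("send player %d's command to player %d\n",
				       cmd->player, i);
			/* else: player i stays in the fog; push state later
			   if the sender's units ever leave it */
		}
	}
	
	int main(void)
	{
		players[0] = (player_t){  0.0f,  0.0f };
		players[1] = (player_t){ 10.0f,  0.0f };   /* inside AOI  */
		players[2] = (player_t){ 99.0f, 99.0f };   /* outside AOI */
		players[3] = (player_t){  5.0f,  5.0f };   /* inside AOI  */
	
		cmd_t cmd = { .player = 0, .buttons = 1 };
		forward_cmd(&cmd);
		return 0;
	}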