[Localization Reference] Screen Monitoring Programming

Original article:
http://www.dklkt.cn/article.asp?id=114

I recently spent some time studying this topic, so here are my notes.

First of all, there are currently several ways to capture the screen:

1. Using GDI functions or the Windows Media API
2. Using DirectX
3. Using API hook techniques
4. Using a graphics driver (mirror driver)

Regarding implementation, here is an article from CodeProject that is worth reading.

Quote:
Various methods for capturing the screen
By Gopalakrishna Palem
Contents
Introduction
Capture it the GDI way
And the DirectX way of doing it
Capturing the screen with Windows Media API

Introduction
Sometimes, we want to capture the contents of the entire screen programmatically. The following explains how it can be done. Typically, the immediate options we have, among others, are using GDI and/or DirectX. Another option worth considering is the Windows Media API. Here, we will consider each of them and see how they can be used for our purpose. In each of these approaches, once we get the screenshot into our application-defined memory or bitmap, we can use it to generate a movie. Refer to the article Create Movie From HBitmap for more details about creating movies from bitmap sequences programmatically.
Capture it the GDI way
When performance is not an issue and all we want is a snapshot of the desktop, we can consider the GDI option. This mechanism is based on the simple principle that the desktop is also a window - that is, it has a window handle (HWND) and a device context (DC). If we can get the device context of the desktop to be captured, we can just blit those contents to our application-defined device context in the normal way. And getting the device context of the desktop is pretty straightforward if we know its window handle - which can be obtained through the function GetDesktopWindow(). Thus, the steps involved are:
Acquire the desktop window handle using the function GetDesktopWindow();
Get the DC of the desktop window using the function GetDC();
Create a compatible DC for the desktop DC and a compatible bitmap to select into that compatible DC. These can be done using CreateCompatibleDC() and CreateCompatibleBitmap(); selecting the bitmap into our DC can be done with SelectObject();
Whenever you are ready to capture the screen, just blit the contents of the desktop DC into the created compatible DC - that's all - you are done. The compatible bitmap we created now contains the contents of the screen at the moment of the capture.
Do not forget to release the objects when you are done. Memory is precious (for the other applications).
Example
void CaptureScreen()
{
    int nScreenWidth  = GetSystemMetrics(SM_CXSCREEN);
    int nScreenHeight = GetSystemMetrics(SM_CYSCREEN);
    HWND hDesktopWnd = GetDesktopWindow();
    HDC hDesktopDC = GetDC(hDesktopWnd);
    HDC hCaptureDC = CreateCompatibleDC(hDesktopDC);
    HBITMAP hCaptureBitmap = CreateCompatibleBitmap(hDesktopDC,
                                 nScreenWidth, nScreenHeight);
    SelectObject(hCaptureDC, hCaptureBitmap);

    BitBlt(hCaptureDC, 0, 0, nScreenWidth, nScreenHeight,
           hDesktopDC, 0, 0, SRCCOPY | CAPTUREBLT);

    SaveCapturedBitmap(hCaptureBitmap); // Placeholder - put your code here
                                        // to save the captured image to disk

    ReleaseDC(hDesktopWnd, hDesktopDC);
    DeleteDC(hCaptureDC);
    DeleteObject(hCaptureBitmap);
}
In the above code snippet, the function GetSystemMetrics() returns the screen width when used with SM_CXSCREEN, and the screen height when called with SM_CYSCREEN. Refer to the accompanying source code for details of how to save the captured bitmap to the disk and how to send it to the clipboard. It's pretty straightforward. The source code implements the above technique for capturing the screen contents at regular intervals, and creates a movie out of the captured image sequences.
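As a hedged illustration of the SaveCapturedBitmap() placeholder above (this is a minimal sketch, not the article's accompanying source code), the following converts the captured DDB to a 32-bpp DIB with GetDIBits() and writes it out as a .bmp file; the output file name and the absence of error handling are simplifications:

#include <windows.h>
#include <stdio.h>

// Sketch of the SaveCapturedBitmap() placeholder: DDB -> 32-bpp DIB -> .bmp file.
// Note: for strict correctness the bitmap should not be selected into a DC
// while GetDIBits() is called; the article's snippet does not deselect it.
void SaveCapturedBitmap(HBITMAP hBitmap)
{
    BITMAP bmp;
    GetObject(hBitmap, sizeof(bmp), &bmp);          // query width/height

    BITMAPINFOHEADER bih = {0};
    bih.biSize        = sizeof(bih);
    bih.biWidth       = bmp.bmWidth;
    bih.biHeight      = bmp.bmHeight;               // positive = bottom-up DIB
    bih.biPlanes      = 1;
    bih.biBitCount    = 32;
    bih.biCompression = BI_RGB;

    DWORD cbBits = bmp.bmWidth * 4 * bmp.bmHeight;  // 32 bpp -> rows already DWORD-aligned
    BYTE* pDibBits = new BYTE[cbBits];

    HDC hdc = GetDC(NULL);
    GetDIBits(hdc, hBitmap, 0, bmp.bmHeight, pDibBits,
              (BITMAPINFO*)&bih, DIB_RGB_COLORS);   // convert DDB pixels to DIB bits
    ReleaseDC(NULL, hdc);

    BITMAPFILEHEADER bfh = {0};
    bfh.bfType    = 0x4D42;                         // 'BM'
    bfh.bfOffBits = sizeof(bfh) + sizeof(bih);
    bfh.bfSize    = bfh.bfOffBits + cbBits;

    FILE* f = fopen("capture.bmp", "wb");           // assumed output file name
    fwrite(&bfh, sizeof(bfh), 1, f);
    fwrite(&bih, sizeof(bih), 1, f);
    fwrite(pDibBits, cbBits, 1, f);
    fclose(f);

    delete[] pDibBits;
}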
And the DirectX way of doing it

Capturing the screenshot with DirectX is a pretty easy task. DirectX offers a neat way of doing this.

Every DirectX application contains what we call a buffer, or a surface, to hold the contents of the video memory related to that application. This is called the back buffer of the application. Some applications might have more than one back buffer. And there is another buffer that every application can access by default - the front buffer. The front buffer holds the video memory related to the desktop contents, and so essentially is the screen image.

By accessing the front buffer from our DirectX application, we can capture the contents of the screen at that moment.

Accessing the front buffer from a DirectX application is pretty easy and straightforward. The interface IDirect3DDevice9 provides the GetFrontBufferData() method, which takes an IDirect3DSurface9 object pointer and copies the contents of the front buffer onto that surface. The IDirect3DSurface9 object can be created using the method IDirect3DDevice9::CreateOffscreenPlainSurface(). Once the screen is captured onto the surface, we can use the function D3DXSaveSurfaceToFile() to save the surface directly to the disk in bitmap format. Thus, the code to capture the screen looks as follows:
extern IDirect3DDevice9* g_pd3dDevice;
void CaptureScreen()
{
    IDirect3DSurface9* pSurface;
    g_pd3dDevice->CreateOffscreenPlainSurface(ScreenWidth, ScreenHeight,
        D3DFMT_A8R8G8B8, D3DPOOL_SCRATCH, &pSurface, NULL);
    g_pd3dDevice->GetFrontBufferData(0, pSurface);
    D3DXSaveSurfaceToFile("Desktop.bmp", D3DXIFF_BMP, pSurface, NULL, NULL);
    pSurface->Release();
}
In the above, g_pd3dDevice is an IDirect3DDevice9 object, and has been assumed to be properly initialized. This code snippet saves the captured image onto the disk directly. However, instead of saving to disk, if we just want to operate on the image bits directly, we can do so by using the method IDirect3DSurface9::LockRect(). This gives a pointer to the surface memory - which is essentially a pointer to the bits of the captured image. We can copy the bits to our application-defined memory and operate on them. The following code snippet shows how the surface contents can be copied into our application-defined memory:
extern void* pBits;
extern IDirect3DDevice9* g_pd3dDevice;
IDirect3DSurface9* pSurface;
g_pd3dDevice->CreateOffscreenPlainSurface(ScreenWidth, ScreenHeight,
                                          D3DFMT_A8R8G8B8, D3DPOOL_SCRATCH,
                                          &pSurface, NULL);
g_pd3dDevice->GetFrontBufferData(0, pSurface);
D3DLOCKED_RECT lockedRect;
pSurface->LockRect(&lockedRect, NULL,
                   D3DLOCK_NO_DIRTY_UPDATE |
                   D3DLOCK_NOSYSLOCK | D3DLOCK_READONLY);
for (int i = 0; i < ScreenHeight; i++)
{
    memcpy((BYTE*)pBits + i * ScreenWidth * BITSPERPIXEL / 8,
           (BYTE*)lockedRect.pBits + i * lockedRect.Pitch,
           ScreenWidth * BITSPERPIXEL / 8);
}
pSurface->UnlockRect();
pSurface->Release();
In the above, pBits is a void*. Make sure that we have allocated enough memory before copying into pBits. A typical value for BITSPERPIXEL is 32 bits per pixel; however, it may vary depending on your current monitor settings. The important point to note here is that the width of the surface is not the same as the captured screen image width. Because of memory alignment issues (memory aligned to word boundaries is assumed to be accessed faster than non-aligned memory), the surface might have additional padding at the end of each row to keep the rows aligned to word boundaries. lockedRect.Pitch gives us the number of bytes between the starting points of two successive rows. That is, to advance to the correct point on the next row, we should advance by Pitch, not by Width. You can copy the surface bits in reverse, using the following:
for (int i = 0; i < ScreenHeight; i++)
{
    memcpy((BYTE*)pBits + (ScreenHeight - i - 1) * ScreenWidth * BITSPERPIXEL / 8,
           (BYTE*)lockedRect.pBits + i * lockedRect.Pitch,
           ScreenWidth * BITSPERPIXEL / 8);
}
This may come in handy when you are converting between top-down and bottom-up bitmaps.
While the above technique of LockRect() is one way of accessing the captured image content on IDirect3DSurface9, we have another more sophisticated method defined for IDirect3DSurface9, the GetDC() method. We can use the IDirect3DSurface9::GetDC() method to get a GDI compatible device context for the DirectX image surface, which makes it possible to directly blit the surface contents to our application defined DC. Interested readers can explore this alternative.
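As a rough sketch of that GetDC() alternative (an assumption, not code taken from the article): the surface filled by GetFrontBufferData() is exposed as a GDI device context and blitted into an application-owned memory DC; hAppDC and the ScreenWidth/ScreenHeight globals are assumed to exist as in the earlier snippets.

extern int ScreenWidth, ScreenHeight;
extern IDirect3DSurface9* pSurface;   // already filled by GetFrontBufferData()
extern HDC hAppDC;                    // application memory DC with a compatible bitmap selected

HDC hSurfaceDC = NULL;
if (SUCCEEDED(pSurface->GetDC(&hSurfaceDC)))
{
    BitBlt(hAppDC, 0, 0, ScreenWidth, ScreenHeight,
           hSurfaceDC, 0, 0, SRCCOPY);              // copy the surface pixels via GDI
    pSurface->ReleaseDC(hSurfaceDC);                // must pair with GetDC()
}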
The sample source code provided with this article implements the technique of copying the contents of an off-screen plain surface onto a user-created bitmap for capturing the screen contents at regular intervals, and creates a movie out of the captured image sequences.

However, a point worth noting when using this technique for screen capture is the caution mentioned in the documentation: GetFrontBufferData() is a slow operation by design and should not be used in performance-critical applications. Thus, the GDI approach is preferable to the DirectX approach in such cases.
Windows Media API for capturing the screen
Windows Media 9.0 supports screen captures using the Windows Media Encoder 9 API. It includes a codec named Windows Media Video 9 Screen codec that has been specially optimized to operate on the content produced through screen captures. The Windows Media Encoder API provides the interface IWMEncoder2, which can be used to capture the screen content efficiently.
Working with the Windows Media Encoder API for screen captures is pretty straightforward. First, we need to start with the creation of an IWMEncoder2 object by using the CoCreateInstance() function. This can be done as:
IWMEncoder2* g_pEncoder = NULL;
CoCreateInstance(CLSID_WMEncoder, NULL, CLSCTX_INPROC_SERVER,
                 IID_IWMEncoder2, (void**)&g_pEncoder);
The encoder object thus created contains all the operations for working with the captured screen data. However, in order to perform its operations properly, the encoder object depends on the settings defined in what is called a profile. A profile is nothing but a file containing all the settings that control the encoding operations. We can also create custom profiles at runtime with various customized options, such as codec options, depending on the nature of the captured data. To use a profile with our screen capture application, we create a custom profile based on the Windows Media Video 9 Screen codec. Custom profile objects are supported through the interface IWMEncProfile2. We can create a custom profile object by using the CoCreateInstance() function as:
IWMEncProfile2* g_pProfile = NULL;
CoCreateInstance(CLSID_WMEncProfile2, NULL, CLSCTX_INPROC_SERVER,
                 IID_IWMEncProfile2, (void**)&g_pProfile);
We need to specify the target audience for the encoder in the profile. Each profile can hold multiple audience configurations, which are objects of the interface IWMEncAudienceObj. Here, we use one audience object for our profile. We create the audience object for our profile by using the method IWMEncProfile::AddAudience(), which returns a pointer to IWMEncAudienceObj that can then be used for configuration such as video codec settings (IWMEncAudienceObj::put_VideoCodec()), video frame size settings (IWMEncAudienceObj::put_VideoHeight() and IWMEncAudienceObj::put_VideoWidth()), etc. For example, we set the video codec to the Windows Media Video 9 Screen codec as:
extern IWMEncAudienceObj* pAudience;
#define VIDEOCODEC MAKEFOURCC('M','S','S','2')
    // MSS2 is the fourcc for the screen codec

long lCodecIndex = -1;
g_pProfile->GetCodecIndexFromFourCC(WMENC_VIDEO, VIDEOCODEC,
    &lCodecIndex); // get the index of the codec

pAudience->put_VideoCodec(0, lCodecIndex);

The fourcc is a kind of unique identifier for each codec. The fourcc for the Windows Media Video 9 Screen codec is MSS2. IWMEncAudienceObj::put_VideoCodec() takes the codec index as input to identify a particular codec - which can be obtained using the method IWMEncProfile::GetCodecIndexFromFourCC().
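For completeness, here is a minimal sketch of how the pAudience object declared extern above could be obtained from the profile. The 100000 bps bitrate value is an assumed example, and the frame-size calls simply reuse the methods named earlier; treat this as a sketch rather than the article's code.

// Sketch only: AddAudience() takes a target bitrate (100000 bps is an assumed
// example value) and returns the audience object used in the snippet above.
extern long ScreenWidth, ScreenHeight;        // same globals as in the earlier snippets
IWMEncAudienceObj* pAudience = NULL;
g_pProfile->AddAudience(100000, &pAudience);
pAudience->put_VideoWidth(0, ScreenWidth);    // frame size of the captured video
pAudience->put_VideoHeight(0, ScreenHeight);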
Once we have finished configuring the profile object, we can select that profile into our encoder by using the method IWMEncSourceGroup::put_Profile(), which is defined on the source group objects of the encoder. A source group is a collection of sources, where each source might be a video stream, an audio stream, an HTML stream, etc. Each encoder object can work with many source groups from which it gets its input data. Since our screen capture application uses only a video stream, our encoder object needs one source group with a single source, the video source, in it. This single video source needs to be configured to use the Screen Device as the input source, which can be done by using the method IWMEncVideoSource2::SetInput(BSTR) as:
extern IWMEncVideoSource2* pSrcVid;
pSrcVid->SetInput(CComBSTR("ScreenCap://ScreenCapture1"));
The destination output can be configured to save into a video file (wmv movie) by using the method IWMEncFile::put_LocalFileName(), which requires an IWMEncFile object. This IWMEncFile object can be obtained by using the method IWMEncoder::get_File() as:
IWMEncFile* pOutFile = NULL;
g_pEncoder->get_File(&pOutFile);
pOutFile->put_LocalFileName(CComBSTR(szOutputFileName));
Now, once all the necessary configuration has been done on the encoder object, we can use the method IWMEncoder::Start() to start capturing the screen. The methods IWMEncoder::Stop() and IWMEncoder::Pause() can be used to stop and pause the capture.
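A minimal sketch of that start/stop sequence follows; the PrepareToEncode() call and its placement are an assumption based on typical Encoder SDK usage, not something stated in this article.

// Sketch of the encoding session lifecycle (PrepareToEncode() placement is an
// assumption about typical usage, not taken from the article).
g_pEncoder->PrepareToEncode(VARIANT_TRUE);   // validate/prepare the configuration
g_pEncoder->Start();                         // begin capturing the screen
// ... capture runs until we decide to stop ...
g_pEncoder->Stop();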
While this deals with full-screen capture, we can alternatively select the region to capture by adjusting the properties of the input video source stream. For this, we use the IPropertyBag interface of the IWMEncVideoSource2 object as:
#define WMSCRNCAP_WINDOWLEFT   CComBSTR("Left")
#define WMSCRNCAP_WINDOWTOP    CComBSTR("Top")
#define WMSCRNCAP_WINDOWRIGHT  CComBSTR("Right")
#define WMSCRNCAP_WINDOWBOTTOM CComBSTR("Bottom")
#define WMSCRNCAP_FLASHRECT    CComBSTR("FlashRect")
#define WMSCRNCAP_ENTIRESCREEN CComBSTR("Screen")
#define WMSCRNCAP_WINDOWTITLE  CComBSTR("WindowTitle")
extern IWMEncVideoSource2* pSrcVid;
int nLeft, nRight, nTop, nBottom;
IPropertyBag* pPropertyBag = NULL;
pSrcVid->QueryInterface(IID_IPropertyBag, (void**)&pPropertyBag);
CComVariant varValue = false;
pPropertyBag->Write(WMSCRNCAP_ENTIRESCREEN, &varValue);
varValue = nLeft;
pPropertyBag->Write(WMSCRNCAP_WINDOWLEFT, &varValue);
varValue = nRight;
pPropertyBag->Write(WMSCRNCAP_WINDOWRIGHT, &varValue);
varValue = nTop;
pPropertyBag->Write(WMSCRNCAP_WINDOWTOP, &varValue);
varValue = nBottom;
pPropertyBag->Write(WMSCRNCAP_WINDOWBOTTOM, &varValue);
http://www.codeproject.com/KB/dialog/screencap.aspx
The accompanying source code implements this technique for capturing the screen. One interesting point, apart from the nice quality of the produced output movie, is that in this approach the mouse cursor is also captured. (By default, GDI and DirectX are unlikely to capture the mouse cursor.)
Note that your system needs the Windows Media 9.0 SDK components installed to create applications using the Windows Media 9.0 API.

To run your applications, end users must install the Windows Media Encoder 9 Series. When you distribute applications based on the Windows Media Encoder SDK, you must also include the Windows Media Encoder software, either by redistributing Windows Media Encoder in your setup, or by requiring your users to install Windows Media Encoder themselves.

The Windows Media Encoder 9.0 can be downloaded from:
Windows Media Encoder
Conclusion
All the techniques discussed above are aimed at a single goal - capturing the contents of the screen. However, as can be guessed easily, the results vary depending upon the particular technique employed in the program. If all that we want is just an occasional snapshot, the GDI approach is a good choice, given its simplicity. However, using Windows Media would be a better option if we want more professional results. One point worth noting is that the quality of the content captured through these mechanisms might depend on the settings of the system. For example, disabling hardware acceleration (Desktop properties | Settings | Advanced | Troubleshoot) might drastically improve the overall quality and performance of the capture application.
Regarding API hooking and the Mirror Driver (graphics driver) approach, have a look at this article:

Quote:
http://www.cnblogs.com/niukun/archive/2008/03/08/1096601.html

Screen recording, remote desktop transmission, and screen capture based on Windows graphics drivers

Screen capture is a key technique in screen recording, remote computer control, and multimedia teaching software. There are several ways to capture the screen on Windows; research focuses on how to capture screen image data in DIB (Device-Independent Bitmap) format quickly and efficiently. The capture technique popular in today's commercial software is mainly API hooking, but a single capture with it still costs a fair amount of time, which places noticeable demands on the hardware running the software; moreover, it is a non-standard technique that Microsoft does not recommend.
1. Screen capture techniques

1.1 Using API hooking

Screen capture with API hooking is based on the following principle: most screen drawing is done by calling the drawing functions in user-mode gdi32.dll. If API hooking is used to intercept every call to these functions in the system, the coordinates of the refreshed or changed screen regions can be obtained; the API function BitBlt is then used to copy the refreshed or changed region, as a DDB bitmap, into memory, GetDIBits is used to convert the DDB bitmap into a DIB, and finally the data is compressed, stored, or transmitted.
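As a rough illustration of the copy-and-convert step just described (the hooking itself is omitted), here is a minimal sketch assuming the changed rectangle rc has already been obtained from the intercepted calls:

#include <windows.h>

// Sketch: blit the changed rectangle of the screen into a DDB, then convert
// it to a 32-bpp DIB with GetDIBits() so it can be compressed/stored/sent.
void GrabChangedRect(const RECT& rc, BYTE* pDibBits /* caller-allocated */)
{
    int w = rc.right - rc.left, h = rc.bottom - rc.top;

    HDC hScreenDC = GetDC(NULL);
    HDC hMemDC    = CreateCompatibleDC(hScreenDC);
    HBITMAP hDDB  = CreateCompatibleBitmap(hScreenDC, w, h);
    HGDIOBJ hOld  = SelectObject(hMemDC, hDDB);

    // Copy only the changed region of the screen into the DDB.
    BitBlt(hMemDC, 0, 0, w, h, hScreenDC, rc.left, rc.top, SRCCOPY);
    SelectObject(hMemDC, hOld);          // deselect the bitmap before GetDIBits()

    // Convert the DDB to a DIB.
    BITMAPINFOHEADER bih = {0};
    bih.biSize = sizeof(bih); bih.biWidth = w; bih.biHeight = h;
    bih.biPlanes = 1; bih.biBitCount = 32; bih.biCompression = BI_RGB;
    GetDIBits(hScreenDC, hDDB, 0, h, pDibBits, (BITMAPINFO*)&bih, DIB_RGB_COLORS);

    DeleteObject(hDDB);
    DeleteDC(hMemDC);
    ReleaseDC(NULL, hScreenDC);
}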

With this scheme, a capture is performed only when a change is detected, so every capture is a useful operation, and each capture (except the first) grabs only the refreshed or changed portion, which fundamentally solves the problem of large data volumes. However, the technique still has the following drawbacks: (1) the actual capture uses API functions and grabs a DDB bitmap, which must go through a format conversion, costing a fair amount of time; (2) if the changed-region rectangles R1, R2, ..., Rn arrive in succession, they are usually not captured one by one but merged into a single rectangle so that no change information is lost; this captures regions that did not actually change, which not only increases capture time but also produces redundant data; (3) the technique does not support DirectDraw: an application may use the DirectDraw driver to manipulate display memory directly (hardware blits, hardware overlays, flipping surfaces, and so on) without making GDI calls, in which case API hooking loses its effect and cannot detect screen changes; (4) API hooking is used in practice for on-screen word capture, remote control, and multimedia teaching, but it is a non-standard technique that Microsoft does not recommend.

1.2 Using a graphics driver

The principle of this technique: an application calls Win32 GDI functions to issue graphics output requests, which pass through kernel-mode GDI; kernel-mode GDI forwards them to the corresponding graphics driver, such as the display driver (the original article shows the communication flow in a figure). In detail:

(1) The display driver exports a set of Device Driver Interface (DDI) functions for GDI to call. Information passes between GDI and the driver through the input/output parameters of these entry points.

(2) When the display driver is loaded, GDI calls the driver's DrvEnableDriver function. Here the driver reports to GDI the DDI entry points it supports, some of which are the graphics output functions to be hooked.

(3) After DrvEnableDriver returns successfully, GDI calls the driver's DrvEnablePDEV function. Here the display mode can be set, and a PDEV structure is created; the PDEV is the logical representation of the physical display.

(4) After the PDEV has been created successfully, the display driver creates a surface for the video hardware. This surface can be a standard DIB-managed surface; the driver associates it with the PDEV, so that all drawing operations supported by the display driver take place on this DIB surface.

(5) When an application calls the drawing functions in user-mode GDI32.DLL to issue a graphics request, the graphics engine forwards the request to the display driver through the corresponding DDI functions, and the display driver notifies the application of the graphics-change event.

(6) After receiving the notification, the application calls ExtEscape to issue a request, passing a buffer as a parameter. The graphics engine calls the driver's DDI function DrvEscape to handle the application's ExtEscape call, and the changed image data is copied from the driver's surface into the buffer; in this way the data travels from the kernel-mode graphics driver up to the application layer (see the sketch after this list).

(7) The image data the application receives is already in standard DIB format, so it can be compressed, transmitted, or stored directly.
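A minimal, hypothetical sketch of step (6) as seen from the application side follows; the escape code and the reply structure are invented placeholders, since every mirror driver defines its own private escape protocol.

#include <windows.h>

#define ESC_GET_CHANGED_RECT 0x10001     // hypothetical driver-defined escape code

struct CHANGED_RECT_DATA                 // hypothetical layout of the driver's reply
{
    RECT rcChanged;                      // region that was redrawn
    BYTE bits[1];                        // DIB pixel data for that region follows
};

BOOL QueryChangedRegion(HDC hMirrorDC, void* pOutBuf, int cbOutBuf)
{
    // ExtEscape() returns a positive value on success, 0 if the escape is not supported.
    int r = ExtEscape(hMirrorDC, ESC_GET_CHANGED_RECT,
                      0, NULL,                       // no input data
                      cbOutBuf, (LPSTR)pOutBuf);     // driver fills the caller's buffer
    return r > 0;
}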
1.3 Characteristics of the graphics-driver technique

The above describes the principle and process of capturing the screen with a graphics driver. As can be seen, this technique involves writing a kernel-mode graphics driver and is relatively complex to implement, but its main advantages are:
(1) Like API hooking, the driver technique captures only the changed screen regions; but the driver technique is a standard technique and is recommended by Microsoft.

(2) When API hooking actually captures the screen it uses API functions and grabs a DDB bitmap, which must be converted from DDB to DIB; the driver technique copies the captured region's data to the application directly from the DIB surface it manages, significantly reducing the time cost of a single capture.
(3) If small screen regions change rapidly and the changed-region rectangles R1, R2, R3, ..., Rn arrive in succession, the lower per-capture cost makes it less likely that the rectangles pile up and have to be merged; changed regions are handled promptly, which not only improves continuity but also keeps the capture time and the data volume from spiking. This is where the technique excels.
From this comparison, whether for remote desktop or screen recording, mirror-driver-based screen capture is a good choice: it beats hooking both in resource usage (mainly CPU) and in the amount of data produced.
I have recently been working on remote desktop transmission, so I had to look into Mirror drivers; the technique is used in a lot of software. I have not seen an open-source driver, though, and I do not have the energy to write one, so I am using a free driver from the Internet that also comes with API documentation.
Roughly, the driver works by copying the display output into a buffer and recording the rectangle of each screen update; from this output, an application can easily read the data in the buffer.
The code is still being debugged; once it is finished I will post the source!

Having read these two articles, you should now have a general idea of how screen monitoring is implemented. Of course, I have not yet gotten started on anything as advanced as the Mirror Driver myself; I have mainly studied the GDI implementation.
Speaking of which, we have to mention the famous Radmin. Its early versions used interlaced scanning: the image is very smooth, network traffic is very small, and CPU usage is low. I have collected quite a few trojans' screen-monitoring modules and found that very few surpass Radmin. Roughly speaking, the current GDI-based methods include interlaced scanning, difference comparison, the XOR algorithm, the minimal-rectangle algorithm, the "grid" algorithm, and so on. I focused on the two I consider best: the XOR algorithm and interlaced scanning.
First, the XOR algorithm. Its obvious advantages are simple implementation, high image quality, and low network traffic; its drawbacks are relatively high CPU usage and a limited frame rate, reaching at most about 40 frames per second in local testing. The implementation is to capture the screen continuously and XOR each new frame's pixel data against the previous frame's, element by element. Because only a small part of the screen usually changes, the result is mostly zeros, which compresses extremely well with an algorithm such as zlib. The remote-control tool SEU_Peeper 0.11 beta3 from Southeast Network Security uses this algorithm.
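A minimal sketch of the XOR idea described above (not SEU_Peeper's actual code): XOR the current frame against the previous one so that unchanged bytes become zero, then feed the result to a general-purpose compressor such as zlib.

#include <windows.h>

// prevFrame/currFrame: raw 32-bpp pixel buffers of equal size (width*height*4 bytes).
void XorDiff(const BYTE* prevFrame, const BYTE* currFrame,
             BYTE* xorOut, size_t cbFrame)
{
    for (size_t i = 0; i < cbFrame; ++i)
        xorOut[i] = prevFrame[i] ^ currFrame[i];   // unchanged bytes become 0
}

// On the viewer side the same operation restores the frame:
//     curr[i] = prev[i] ^ xorOut[i];
// The mostly-zero xorOut buffer is what gets compressed and sent.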
Now for interlaced scanning. I recently got hold of the Gh0st source code, and its screen transfer uses interlaced scanning. The idea is to scan the whole screen from top to bottom at a fixed row interval (say every 10 rows), infer the changed regions from the scan results, and then capture and send only the changed regions. The data can of course be compressed before sending to reduce traffic. Interlaced scanning can exceed 60 frames per second, so it feels very responsive; its traffic is about twice that of the XOR algorithm, but its CPU usage is extremely low.
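A minimal sketch of the interlaced-scan idea (not Gh0st's actual code): compare every tenth row of the new frame with the previous frame and collect the vertical bands that contain changes; only those bands would then be captured and sent.

#include <windows.h>
#include <cstring>
#include <vector>

const int SCAN_STEP = 10;               // scan every 10th row, as described above

struct Band { int top; int bottom; };   // a changed horizontal band of the screen

std::vector<Band> ScanChangedBands(const BYTE* prevFrame, const BYTE* currFrame,
                                   int width, int height, int bytesPerPixel)
{
    std::vector<Band> bands;
    const size_t rowBytes = (size_t)width * bytesPerPixel;

    for (int y = 0; y < height; y += SCAN_STEP)
    {
        if (memcmp(prevFrame + y * rowBytes, currFrame + y * rowBytes, rowBytes) != 0)
        {
            // Rows between the sampled rows were not examined, so mark the whole
            // band around the changed row as dirty.
            Band b = { y - SCAN_STEP < 0 ? 0 : y - SCAN_STEP,
                       y + SCAN_STEP > height ? height : y + SCAN_STEP };
            if (!bands.empty() && b.top <= bands.back().bottom)
                bands.back().bottom = b.bottom;     // merge overlapping bands
            else
                bands.push_back(b);
        }
    }
    return bands;
}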
Over the past few days I read Gh0st's screen-transfer code carefully. Interlaced scanning is really like continuously patching the screen: one small patch after another gradually updates the changed areas. Gh0st's interlaced scanning has some problems, though. Although the refresh rate is high, motion on the screen is still not very smooth and visible fragments appear. Looking closely at the code, it does not scan the whole screen before analyzing; as soon as it finds a change, it enlarges the region somewhat and sends the data. This both causes regions to be sent repeatedly and produces a lot of fragments.

To address this, I tried to improve the algorithm. First I scanned the whole screen and then sent one large merged changed region; the fragment problem went away, but CPU usage went up. I also tried dividing the screen into blocks and inferring the changed regions from them; the fragment problem was not fundamentally solved, and CPU usage increased as well.

I cannot think of any other good approach for now, so I will stop here.