|
Thanks a lot! This looks exactly like the kind of library I was looking for.
|
|
|
|
|
I have a set of SPR and MAP files on my hard disk, all of which I have uploaded to a shared folder on OneDrive. You can download some or all of them from the following link:
Here is the link
LoadImage fails to open these files directly, even if I change their extension to .bmp, because they are not in BMP format. I know for certain, however, that these are image files and they can be converted to BMP, so that LoadImage succeeds and BitBlt renders them to my window. The problem is that I don't know how to perform the conversion.
My current approach: I invoke CreateBitmap to create a new 32-bit BGRA bitmap, open the file by invoking CreateFile, read all of its bytes into a byte array (a "buffer") by invoking ReadFile, then CloseHandle, then SetBitmapBits or SetDIBits, and finally BitBlt (after CreateCompatibleDC and SelectObject). With this, I see an unintelligible image in my window.
Later I figured out that these images are 8-bit color. In the SPR files, the first byte is the width of the image, the second byte is the height, and the rest of the bytes are the pixels, one byte per pixel. MAP files are always 320x200 images, so all of their bytes are pixels, in the same format as in the SPR files. When I read one byte and set all channels of the bitmap to that byte, BitBlt renders the bitmap and I can see the image clearly, but grayscaled. I want to see it in color. When I use 2 bits for blue, 3 bits for green and 3 bits for red, I see the image in the wrong colors. I know how it is supposed to look in DOSBox.
I need help converting and correcting the colors rather than the shape; the shape is fine. My purpose is to convert the SPR and MAP images to 32-bit BGRA bitmaps that are colorful, not grayscaled.
modified 4-May-17 16:09pm.
|
|
|
|
|
You cannot just read the files and expect the system to figure out what the content is. You need to reformat the data into proper bitmap structures before you can display them. See BITMAP structure (Windows)[^].
|
|
|
|
|
According to SPR files | Creatures Wiki | Fandom powered by Wikia[^] the colour values are palette indexes:
Quote: "The actual image data is simply color indexes arranged into horizontal scanlines. The mappings between color indices and 256 RGB colors is given in the PALETTE.DTA[^] file; Multiply the red, green and blue results returned by 4 to gain the full colour range."
If you don't have the palette file, you can try the default palette provided in the link.
So you have to load the SPR file into memory, create a bitmap of the same size, and set the bitmap pixels to the RGB values retrieved from the colour table using the colour index from the SPR pixels.
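The recipe above can be sketched in plain C++ (kept free of Win32 calls so it stands alone). It assumes the PALETTE.DTA layout described in the quoted wiki entry: 256 RGB triplets of 6-bit VGA values (0..63), hence the multiply-by-4; the function name is illustrative:

```cpp
#include <cstdint>
#include <vector>

// Convert 8-bit palettized SPR/MAP pixels to a 32-bit BGRA buffer.
// 'palette' holds 256 RGB triplets of 6-bit VGA values (0..63), so each
// component is multiplied by 4 to reach the full 0..255 range.
std::vector<uint8_t> IndexedToBGRA(const std::vector<uint8_t>& pixels,
                                   const uint8_t palette[256][3])
{
    std::vector<uint8_t> bgra;
    bgra.reserve(pixels.size() * 4);
    for (uint8_t idx : pixels)
    {
        const uint8_t* rgb = palette[idx];
        bgra.push_back(static_cast<uint8_t>(rgb[2] * 4)); // blue
        bgra.push_back(static_cast<uint8_t>(rgb[1] * 4)); // green
        bgra.push_back(static_cast<uint8_t>(rgb[0] * 4)); // red
        bgra.push_back(0xFF);                             // alpha (opaque)
    }
    return bgra;
}
```

The resulting buffer can then be handed to SetDIBits with a 32-bit BITMAPINFOHEADER; remember that DIB rows are stored bottom-up unless biHeight is negative.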
|
|
|
|
|
Strange. The game doesn't come with its own palette (.DTA) file, and I have tried the default one, but the images are still in the wrong colors.
|
|
|
|
|
I'm sorry that I can't help further because I don't know about your game. You can try to search the web for more information about the used colour tables and where they are stored.
|
|
|
|
|
I urgently require source code for a computer graphics mini project on the rotation of a leaf, in the C language and OpenGL.
|
|
|
|
|
Then you need to get writing. No one here is going to do your work for you.
|
|
|
|
|
I urgently require you to start working.
We do not write code on request.
Patrice
“Everything should be made as simple as possible, but no simpler.” Albert Einstein
|
|
|
|
|
I need help pointing me in the right direction.
At work I have been asked to create a program that reads scanning data and then builds a timber log in 3D based on the scanned data. And they need it in DirectX, not OpenGL or WPF or anything like that.
I have been searching, and there are plenty of different approaches using frameworks such as Unity and so on.
How would you approach this?
|
|
|
|
|
I would start this by gathering all the requirements and breaking them down. For instance, what format does the scanning data come in as (is it an obj file for instance)? Do you have to do this yourself or are you getting it from another source? It's always a mistake to think about what coding techniques you need if you don't have an idea about what it is you're being asked to build.
This space for rent
|
|
|
|
|
I have everything well documented, and I have a working solution for getting the scanner data: I get a list with the diameter for every 2 cm of the log.
But my problem is how to approach the 3D part.
They want me to do it in DirectX, not OpenGL, and to select one of the frameworks out there, such as Unity or any other.
The problem is that I have no clue which package to select. Unity?
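Whichever framework ends up being chosen, the geometry itself is a surface of revolution built from the diameter samples: one ring of vertices per sample, rings spaced 2 cm apart. A minimal, framework-agnostic sketch in plain C++ (names and the ring layout are illustrative):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Build one ring of 'segments' vertices per diameter sample. Rings are
// spaced 'spacing' apart along the Z axis (2 cm in this case); adjacent
// rings can then be stitched into triangles for rendering.
std::vector<Vec3> BuildLogRings(const std::vector<float>& diameters,
                                float spacing, int segments)
{
    std::vector<Vec3> verts;
    const float kTwoPi = 6.2831853f;
    for (std::size_t i = 0; i < diameters.size(); ++i)
    {
        const float radius = diameters[i] * 0.5f;
        const float z = spacing * static_cast<float>(i);
        for (int s = 0; s < segments; ++s)
        {
            const float a = kTwoPi * s / segments;
            verts.push_back({ radius * std::cos(a), radius * std::sin(a), z });
        }
    }
    return verts;
}
```

Connecting vertex s of ring i with vertices s and s+1 of ring i+1 gives the two triangles per quad that any DirectX wrapper can consume as an indexed triangle list.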
|
|
|
|
|
Hi,
I have successfully coded a bitmap to be drawn in a child control of a dialog using the WinAPI.
Now I want to fit the bitmap, which is bigger than the window, at specific coordinates; later on I want to add controls that move the picture within the window.
My problem is that I really don't know where to set the variables that position the picture.
Here's the code (only the main file; the headers are not necessary):
#if defined(UNICODE) && !defined(_UNICODE)
#define _UNICODE
#elif defined(_UNICODE) && !defined(UNICODE)
#define UNICODE
#endif
#include <tchar.h>
#include <windows.h>
#include <windowsx.h>
#include <d3d9.h>
#include <d3dx9.h>
#include "Resource.h"
#pragma comment (lib, "d3d9.lib")
LRESULT CALLBACK WindowProcedure (HWND, UINT, WPARAM, LPARAM);
BOOL CALLBACK Raumaktionproc (HWND, UINT, WPARAM, LPARAM);
TCHAR szClassName[ ] = _T("CodeBlocksWindowsApp");
LPDIRECT3D9 d3d;
LPDIRECT3DDEVICE9 d3ddev;
LPDIRECT3DVERTEXBUFFER9 g_pVertexBuffer = NULL;
LPDIRECT3DTEXTURE9 g_pTexture = NULL;
#define D3DFVF_CUSTOMVERTEX ( D3DFVF_XYZ | D3DFVF_TEX1 )
struct Vertex
{
float x, y, z;
float tu, tv;
};
Vertex g_quadVertices[] =
{
{-1.0f, 1.0f, 0.0f, 0.0f,0.0f },
{ 1.0f, 1.0f, 0.0f, 1.0f,0.0f },
{-1.0f,-1.0f, 0.0f, 0.0f,1.0f },
{ 1.0f,-1.0f, 0.0f, 1.0f,1.0f }
};
static HWND hwndRaumkarte;
void initD3D(HWND hwndRaumkarte);
void render_frame(void);
void cleanD3D(void);
void loadTexture(void);
#define ID_StartDialog 1
int WINAPI WinMain (HINSTANCE hThisInstance,
HINSTANCE hPrevInstance,
LPSTR lpszArgument,
int nCmdShow)
{
HWND hwnd;
MSG messages;
WNDCLASSEX wincl;
memset(&messages,0,sizeof(messages));
wincl.hInstance = hThisInstance;
wincl.lpszClassName = szClassName;
wincl.lpfnWndProc = WindowProcedure;
wincl.style = CS_DBLCLKS;
wincl.cbSize = sizeof (WNDCLASSEX);
wincl.hIcon = LoadIcon (NULL, IDI_APPLICATION);
wincl.hIconSm = LoadIcon (NULL, IDI_APPLICATION);
wincl.hCursor = LoadCursor (NULL, IDC_ARROW);
wincl.lpszMenuName = NULL;
wincl.cbClsExtra = 0;
wincl.cbWndExtra = 0;
wincl.hbrBackground = (HBRUSH) COLOR_BACKGROUND;
if (!RegisterClassEx (&wincl))
return 0;
hwnd = CreateWindowEx (
0,
szClassName,
_T("Code::Blocks Template Windows App"),
WS_OVERLAPPEDWINDOW,
CW_USEDEFAULT,
CW_USEDEFAULT,
800,
600,
HWND_DESKTOP,
NULL,
hThisInstance,
NULL
);
ShowWindow (hwnd, nCmdShow);
while (GetMessage (&messages, NULL, 0, 0))
{
TranslateMessage(&messages);
DispatchMessage(&messages);
}
return messages.wParam;
}
LRESULT CALLBACK WindowProcedure (HWND hwnd, UINT message, WPARAM wParam, LPARAM lParam)
{
static HINSTANCE hInstance;
switch (message)
{
case WM_CREATE:
{
hInstance = ((LPCREATESTRUCT)lParam)->hInstance; // remember for DialogBox later
CreateWindow(TEXT("button"), TEXT("Karte anzeigen"), WS_CHILD|WS_VISIBLE,
20, 20, 200, 40, hwnd, (HMENU) ID_StartDialog, hInstance, NULL);
}
break;
case WM_COMMAND:
switch (LOWORD(wParam))
{
case ID_StartDialog:
{
DialogBox(hInstance, MAKEINTRESOURCE(IDD_DIALOG1),hwnd, Raumaktionproc);
InvalidateRect(hwnd,NULL, TRUE);
}
}
break;
case WM_DESTROY:
PostQuitMessage (0);
break;
default:
return DefWindowProc (hwnd, message, wParam, lParam);
}
return 0;
}
BOOL CALLBACK Raumaktionproc(HWND hDlg1, UINT message, WPARAM wParam, LPARAM lParam)
{
hwndRaumkarte=GetDlgItem(hDlg1, ID_RAUMKARTE);
switch (message)
{
case WM_INITDIALOG:
{
ShowWindow(GetDlgItem(hDlg1, IDD_DIALOG1), TRUE);
ShowWindow(GetDlgItem(hDlg1, ID_OK),TRUE);
ShowWindow(GetDlgItem(hDlg1, ID_RAUMKARTE),TRUE);
ShowWindow(GetDlgItem(hDlg1, ID_VERTSCROLL),TRUE);
ShowWindow(GetDlgItem(hDlg1, ID_HORSCROLL),TRUE);
ShowWindow(GetDlgItem(hDlg1, ID_BASIS),TRUE);
ShowWindow(GetDlgItem(hDlg1, ID_FEUER),TRUE);
ShowWindow(GetDlgItem(hDlg1, ID_GEGNERAUSWAHL),TRUE);
ShowWindow(GetDlgItem(hDlg1, ID_KOM),TRUE);
ShowWindow(GetDlgItem(hDlg1, ID_FEUERBEENDEN),TRUE);
ShowWindow(GetDlgItem(hDlg1, ID_FEUERFLUCHT),TRUE);
ShowWindow(GetDlgItem(hDlg1, ID_LABWERFEN),TRUE);
ShowWindow(GetDlgItem(hDlg1, ID_LTAUSCHEN),TRUE);
ShowWindow(GetDlgItem(hDlg1, ID_MOBILINFO),TRUE);
ShowWindow(GetDlgItem(hDlg1, ID_NMOBIL),TRUE);
ShowWindow(GetDlgItem(hDlg1, ID_PLANET),TRUE);
ShowWindow(GetDlgItem(hDlg1, ID_SCAN),TRUE);
ShowWindow(GetDlgItem(hDlg1, ID_VMOBIL),TRUE);
ShowWindow(GetDlgItem(hDlg1, ID_ZIELAUSWAHLLISTE),TRUE);
EnableWindow(GetDlgItem(hDlg1, ID_GEGNERAUSWAHL),FALSE);
EnableWindow(GetDlgItem(hDlg1, ID_FEUERBEENDEN),FALSE);
EnableWindow(GetDlgItem(hDlg1, ID_FEUERFLUCHT),FALSE);
initD3D(hwndRaumkarte);
InvalidateRect(hDlg1, NULL, TRUE);
return FALSE;
};
case WM_PAINT:
{
render_frame();
}
break;
case WM_COMMAND:
switch (LOWORD (wParam))
{
case ID_OK:
{
MessageBox(hDlg1, TEXT("Runde im All beendet"), TEXT("EXO 1.2"), MB_ICONEXCLAMATION|MB_OK);
cleanD3D();
EndDialog(hDlg1,0);
return TRUE;
}
break;
}
break;
}
return FALSE;
}
void loadTexture()
{
D3DXCreateTextureFromFile( d3ddev, "Raumkarte.bmp", &g_pTexture );
d3ddev->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
d3ddev->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
}
void initD3D(HWND hwndRaumkarte)
{
d3d=Direct3DCreate9(D3D_SDK_VERSION);
D3DDISPLAYMODE d3ddm;
d3d->GetAdapterDisplayMode( D3DADAPTER_DEFAULT, &d3ddm );
D3DPRESENT_PARAMETERS d3dpp;
ZeroMemory(&d3dpp, sizeof(d3dpp));
d3dpp.Windowed=TRUE;
d3dpp.SwapEffect=D3DSWAPEFFECT_DISCARD;
d3dpp.hDeviceWindow=hwndRaumkarte;
d3dpp.BackBufferFormat = d3ddm.Format;
d3dpp.EnableAutoDepthStencil = TRUE;
d3dpp.AutoDepthStencilFormat = D3DFMT_D16;
d3dpp.PresentationInterval = D3DPRESENT_INTERVAL_IMMEDIATE;
d3d->CreateDevice(D3DADAPTER_DEFAULT,
D3DDEVTYPE_HAL,
hwndRaumkarte,
D3DCREATE_SOFTWARE_VERTEXPROCESSING,
&d3dpp,
&d3ddev);
loadTexture();
d3ddev->CreateVertexBuffer( 4*sizeof(Vertex), D3DUSAGE_WRITEONLY,
D3DFVF_CUSTOMVERTEX, D3DPOOL_DEFAULT,
&g_pVertexBuffer, NULL );
void *pVertices = NULL;
g_pVertexBuffer->Lock( 0, sizeof(g_quadVertices), (void**)&pVertices, 0 );
memcpy( pVertices, g_quadVertices, sizeof(g_quadVertices) );
g_pVertexBuffer->Unlock();
D3DXMATRIX matProj;
D3DXMatrixPerspectiveFovLH( &matProj, D3DXToRadian( 45.0f ),
1920.0f / 1080.0f, 0.1f, 100.0f );
d3ddev->SetTransform( D3DTS_PROJECTION, &matProj );
d3ddev->SetRenderState(D3DRS_LIGHTING, FALSE);
}
void render_frame(void)
{
d3ddev->Clear(0,NULL,D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, D3DCOLOR_COLORVALUE(0.0f,0.0f,0.0f,1.0f), 1.0f, 0);
D3DXMATRIX matWorld;
D3DXMatrixTranslation( &matWorld, 0.5f, 0.5f, 1.0f );
d3ddev->SetTransform( D3DTS_WORLD, &matWorld );
d3ddev->BeginScene();
d3ddev->SetTexture( 0, g_pTexture );
d3ddev->SetStreamSource( 0, g_pVertexBuffer, 0, sizeof(Vertex) );
d3ddev->SetFVF( D3DFVF_CUSTOMVERTEX );
d3ddev->DrawPrimitive( D3DPT_TRIANGLESTRIP, 0, 2 );
d3ddev->EndScene();
d3ddev->Present(NULL,NULL,NULL,NULL);
}
void cleanD3D(void)
{
if( g_pTexture != NULL )
g_pTexture->Release();
if( g_pVertexBuffer != NULL )
g_pVertexBuffer->Release();
d3ddev->Release();
d3d->Release();
}
For now I need someone to tell me where I have to change the attributes to render the picture in different parts of the window.
I am a newbie; I have two books, but none of them really sheds light on this question.
Where do I have to start? With the vertices? With the matrix? Or in the calls to
D3DXMatrixTranslation or D3DXMatrixPerspectiveFovLH?
Please help, thanks,
Chris
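To the "where do I start?" question above: usually not with the vertices. In code like the posted sample, the quad's placement is controlled by the world matrix set in render_frame() (D3DXMatrixTranslation, optionally combined with D3DXMatrixScaling), while D3DXMatrixPerspectiveFovLH only sets up the camera projection. A minimal D3DX-free sketch of the math (row-major, row-vector convention as in D3DXMATRIX; names are illustrative):

```cpp
// The quad's shape stays fixed in g_quadVertices; where and how large it
// appears is decided by the world matrix. Composing a scale with a
// translation (scale first, then translate) positions and sizes the
// picture without touching the vertex buffer.
struct Mat4 { float m[4][4]; };

Mat4 ScaleThenTranslate(float sx, float sy, float tx, float ty)
{
    Mat4 r = {};
    r.m[0][0] = sx;  r.m[1][1] = sy;  r.m[2][2] = 1.0f;  r.m[3][3] = 1.0f;
    r.m[3][0] = tx;  r.m[3][1] = ty;  // translation lives in the last row
    return r;
}

// Apply the matrix to a point (z = 0, w = 1), as the pipeline would.
void TransformPoint(const Mat4& m, float x, float y, float* ox, float* oy)
{
    *ox = x * m.m[0][0] + m.m[3][0];
    *oy = y * m.m[1][1] + m.m[3][1];
}
```

In the posted code this corresponds to multiplying a D3DXMatrixScaling result with a D3DXMatrixTranslation result and passing the product to SetTransform(D3DTS_WORLD, ...) each frame; controls that move the picture would then only change tx/ty.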
|
|
|
|
|
Is it possible to get a sense of depth and 3D space (X, Y, Z planes) from an image or video? Is there an API that can handle this sort of processing? Perhaps with several images or source videos you could determine the 3D space and boundaries of an area. I know there is photogrammetry software that can reconstruct 3D objects from a series of photos taken from different angles. I am wondering if the same can be done to map out the terrain in an image or video (or a series of them).
|
|
|
|
|
|
Hey Everyone, I am not new to application development in general. I have experience designing basic CAD plugins, but that is about it.
I would like to start a new project. I want to be able to draw and render 3D terrain based on information derived from small segments in Google Earth. How do I do that? I know other applications can do this, like Rhino. So it is possible. The target OS for this functionality is Windows 10 (Desktop).
|
|
|
|
|
First go to the Google developer website and research what information they provide and how.
|
|
|
|
|
|
Hi all,
I am currently working on WPF-based software for some basic CAD functionality. For this I used the example Direct2DonWPF for using DirectX / D2D in a WPF control, which works fine. I can draw what I want and the performance is good, EXCEPT for the flickering when I draw too fast (>25 fps). With GDI+ I was able to activate double buffering and everything was fine. Now I need to implement the DirectX swap chain to get rid of this problem. Unfortunately, while I found many tutorials and examples dealing with this issue, none of them implements the swap chain and D2D in WPF. I have tried for hours on end, but at some point I always get stuck.
Could someone help me understand how to implement the swap chain in the previously linked example, using the D3DImage control etc.?
Thanks for your help,
Max
|
|
|
|
|
I would like to model a 3D object consisting of two parts (A1 and A2). The angle between A1 and A2 is 150°; both parts are cylinders. The 3D object (A1 and A2) lies on the XY plane.
I would like to rotate the 3D object around the A1 axis, with XZ as the render plane. I don't know how to rotate the 3D object around the A1 axis. I have tried different combinations of the following functions:
D3DXMatrixRotationY(&R11, D3DX_PI * D3DXToRadian(0));
D3DXMatrixRotationAxis(&A1, &D3DXVECTOR3 (0.0f, 1.0f, 0.0f), D3DXToRadian(30));
D3DXMatrixRotationZ(&R12, D3DXToRadian(20));
D3DXMatrixTranslation(&T11, 20.0f, 5.0f, 0.0f);
D3DXMatrixRotationX(&R13, D3DXToRadian(40));
HR(gd3dDevice->SetTransform(D3DTS_WORLD, &(R11 * A1 * R13)));
Unfortunately I don't understand how to create a child object (A2). When I rotate object A1, the child object A2 has to rotate automatically around the A1 axis.
Could you help me resolve the problem?
Regards
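A common way to express the hierarchy described above is to give each part its own local matrix and compose the child's with the parent's, i.e. world(A2) = local(A2) * world(A1), so that rotating world(A1) carries A2 along automatically; in the posted D3DX style that would be something like SetTransform(D3DTS_WORLD, &(localA2 * worldA1)). The core of the idea, sketched D3DX-free (left-handed rotation about Y, matching D3DXMatrixRotationY; names are illustrative):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Rotate a point about the Y axis (left-handed convention, row vectors,
// as D3DXMatrixRotationY does). Applying this to A2's offset from A1
// before adding A1's position makes the child follow the parent.
Vec3 RotateAroundY(const Vec3& p, float radians)
{
    const float c = std::cos(radians);
    const float s = std::sin(radians);
    return { p.x * c + p.z * s, p.y, -p.x * s + p.z * c };
}
```

With a rotation axis other than Y (the actual A1 direction), the same composition applies; only the rotation matrix changes (D3DXMatrixRotationAxis).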
|
|
|
|
|
I would like to control two 3D objects separately using the Direct3D library.
I have drawn one cube and one cylinder. Unfortunately, when I move the cube with the keyboard, the cylinder moves too. How do I move only one of the 3D objects?
Regards
My code is here:
#include <windows.h>
#include "CubeDemo.h"
#include "d3dApp.h"
#include "GfxStats.h"
#include "Vertex.h"
#include "DirectInput.h"
#include <crtdbg.h>
#include <list>
#include "d3dUtil.h"
CubeDemo::CubeDemo(HINSTANCE hInstance, std::string winCaption,
D3DDEVTYPE devType, DWORD requestedVP)
: D3DApp(hInstance, winCaption, devType, requestedVP), MAX_SPEED(1500.0f), ACCELE(1000.0f)
{
mGfxStats = new GfxStats();
mCameraRadius = 10.0f;
mCameraRotationY = 1.2 * D3DX_PI;
xPos = 0.0f;
yPos = 0.0f;
mCameraHeight = 1.0f;
HR(D3DXCreateCylinder(gd3dDevice, 4.0f, 5.0f, 6.0f, 10, 4, &mCylinder, 0));
int numCylVerts = mCylinder->GetNumVertices() * 14;
int numCylTris = mCylinder->GetNumFaces() * 14;
mGfxStats->addVertices(mNumGridVertices);
mGfxStats->addVertices(numCylVerts);
mGfxStats->addTriangles(mNumGridTriangles);
mGfxStats->addTriangles(numCylTris);
buildGeoBuffers();
buildFX();
buildVertexBuffer();
buildIndexBuffer();
onResetDevice();
InitAllVertexDeclarations();
checkDeviceCaps();
}
void CubeDemo::buildFX()
{
ID3DXBuffer* errors = 0;
HR(D3DXCreateEffectFromFile(gd3dDevice, "transform.fx",
0, 0, D3DXSHADER_DEBUG, 0, &mFX, &errors));
if( errors )
MessageBox(0, (char*)errors->GetBufferPointer(), 0, 0);
mhTech = mFX->GetTechniqueByName("TransformTech");
mhWVP = mFX->GetParameterByName(0, "gWVP");
}
bool CubeDemo::checkDeviceCaps()
{
D3DCAPS9 caps;
HR(gd3dDevice->GetDeviceCaps(&caps));
if (caps.VertexShaderVersion < D3DVS_VERSION(3, 0))
return false;
if (caps.PixelShaderVersion < D3DPS_VERSION(3, 0))
return false;
return true;
void CubeDemo::drawCylinders()
{
D3DXMATRIX T, R;
D3DXMatrixRotationX(&R, D3DX_PI * 0.5f);
int z = 30;
D3DXVECTOR3 pos(100, 0, 0);
D3DXVECTOR3 target(0.0f, 0.0f, 0.0f);
D3DXVECTOR3 up(0.0f, 1.0f, 0.0f);
D3DXMatrixLookAtLH(&mView, &pos, &target, &up);
D3DXMatrixTranslation(&T, -100.0f, 3.0f, (float)z);
HR(mFX->SetMatrix(mhWVP, &(R * T * mView * mProj)));
HR(mFX->CommitChanges());
HR(mCylinder->DrawSubset(0));
}
void CubeDemo::buildVertexBuffer()
{
HR(gd3dDevice->CreateVertexBuffer(8 * sizeof (VertexPos), D3DUSAGE_WRITEONLY, 0, D3DPOOL_MANAGED, &mVB, 0));
VertexPos* v = 0;
HR(mVB->Lock(0, 0, (void**)&v, 0));
v[0] = VertexPos(-1.0f, -1.0f, -1.0f);
v[1] = VertexPos(-1.0f, 1.0f, -1.0f);
v[2] = VertexPos( 1.0f, 1.0f, -1.0f);
v[3] = VertexPos( 1.0f, -1.0f, -1.0f);
v[4] = VertexPos(-1.0f, -1.0f, 1.0f);
v[5] = VertexPos(-1.0f, 1.0f, 1.0f);
v[6] = VertexPos( 1.0f, 1.0f, 1.0f);
v[7] = VertexPos( 1.0f, -1.0f, 1.0f);
HR(mVB->Unlock());
}
void CubeDemo::buildIndexBuffer()
{
HR(gd3dDevice->CreateIndexBuffer(36 * sizeof(WORD), D3DUSAGE_WRITEONLY, D3DFMT_INDEX16, D3DPOOL_MANAGED, &mIB, 0));
WORD* k = 0;
HR(mIB->Lock(0, 0, (void**)&k, 0));
k[0] = 0; k[1] = 1; k[2] = 2;
k[3] = 0; k[4] = 2; k[5] = 3;
k[6] = 4; k[7] = 6; k[8] = 5;
k[9] = 4; k[10] = 7; k[11] = 6;
k[12] = 4; k[13] = 5; k[14] = 1;
k[15] = 4; k[16] = 1; k[17] = 0;
k[18] = 3; k[19] = 2; k[20] = 6;
k[21] = 3; k[22] = 6; k[23] = 7;
k[24] = 1; k[25] = 5; k[26] = 6;
k[27] = 1; k[28] = 6; k[29] = 2;
k[30] = 4; k[31] = 0; k[32] = 3;
k[33] = 4; k[34] = 3; k[35] = 7;
HR(mIB->Unlock());
}
CubeDemo::~CubeDemo()
{
delete mGfxStats;
ReleaseCOM(mVB);
ReleaseCOM(mIB);
ReleaseCOM(mCylinder);
DestroyAllVertexDeclarations();
}
void CubeDemo::onLostDevice()
{
mGfxStats->onLostDevice();
}
void CubeDemo::onResetDevice()
{
mGfxStats->onResetDevice();
buildProjMtx();
}
void CubeDemo::buildProjMtx()
{
float w = (float)md3dPP.BackBufferWidth;
float h = (float)md3dPP.BackBufferHeight;
D3DXMatrixPerspectiveFovLH(&mProj, D3DX_PI * 0.25f, w/h,
1.0f, 5000.0f);
}
void CubeDemo::updateScene(float dt)
{
mGfxStats->setVertexCount(8);
mGfxStats->setTriCount(12);
mGfxStats->update(dt);
gDInput->poll();
if( gDInput->keyDown(DIK_W) )
xPos = xPos + 0.001f;
if( gDInput->keyDown(DIK_S) )
xPos = xPos - 0.001f;
if( gDInput->keyDown(DIK_A) )
yPos = yPos - 0.001f;
if( gDInput->keyDown(DIK_D) )
yPos = yPos + 0.001f;
mCameraRotationY += gDInput->mouseDX() / 50.0f;
mCameraRadius += gDInput->mouseDY() / 50.0f;
if( fabsf(mCameraRotationY) >= 2.0f * D3DX_PI )
mCameraRotationY = 0.0f;
if( mCameraRadius < 5.0f )
mCameraRadius = 5.0f;
buildViewMtx();
}
void CubeDemo::buildViewMtx()
{
D3DXVECTOR3 pos(0, 0, 100);
D3DXVECTOR3 target(0.0f, 0.0f, 0.0f);
D3DXVECTOR3 up(1.0f, 0.0f, 0.0f);
D3DXMatrixLookAtLH(&mView, &pos, &target, &up);
}
void CubeDemo::drawScene()
{
HR(gd3dDevice->Clear(0, 0,
D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, 0xffffffff, 1.0f, 0));
HR(gd3dDevice->BeginScene());
HR(gd3dDevice->SetStreamSource(0, mVB, 0, sizeof(VertexPos)));
HR(gd3dDevice->SetIndices(mIB));
HR(gd3dDevice->SetVertexDeclaration(VertexPos::Decl));
HR(mFX->SetTechnique(mhTech));
D3DXMATRIX W;
D3DXMatrixIdentity(&W);
HR(gd3dDevice->SetTransform(D3DTS_WORLD, &W));
HR(gd3dDevice->SetTransform(D3DTS_VIEW, &mView));
HR(gd3dDevice->SetTransform(D3DTS_PROJECTION, &mProj));
D3DXMATRIX matTranslate;
D3DXMatrixTranslation(&matTranslate, xPos, yPos, 0.0f);
HR(gd3dDevice->SetTransform(D3DTS_WORLD, &matTranslate));
HR(gd3dDevice->SetRenderState(D3DRS_FILLMODE, D3DFILL_WIREFRAME));
HR(gd3dDevice->DrawIndexedPrimitive(
D3DPT_TRIANGLELIST, 0, 0, 8, 0, 12));
drawCylinders();
HR(gd3dDevice->EndScene());
HR(gd3dDevice->Present(0, 0, 0, 0));
}
void CubeDemo::buildGeoBuffers()
{
std::vector<D3DXVECTOR3> verts;
std::vector<DWORD> indices;
GenTriGrid(100, 100, 1.0f, 1.0f, D3DXVECTOR3(0.0f, 0.0f, 0.0f), verts, indices);
mNumGridVertices = 100*100;
mNumGridTriangles = 99*99*2;
HR(gd3dDevice->CreateVertexBuffer(mNumGridVertices * sizeof(VertexPos),
D3DUSAGE_WRITEONLY, 0, D3DPOOL_MANAGED, &mVB, 0));
VertexPos* v = 0;
HR(mVB->Lock(0, 0, (void**)&v, 0));
for(DWORD i = 0; i < mNumGridVertices; ++i)
v[i] = verts[i];
HR(mVB->Unlock());
HR(gd3dDevice->CreateIndexBuffer(mNumGridTriangles*3*sizeof(WORD), D3DUSAGE_WRITEONLY,
D3DFMT_INDEX16, D3DPOOL_MANAGED, &mIB, 0));
WORD* k = 0;
HR(mIB->Lock(0, 0, (void**)&k, 0));
for(DWORD i = 0; i < mNumGridTriangles*3; ++i)
k[i] = (WORD)indices[i];
HR(mIB->Unlock());
}
modified 15-Jul-16 4:34am.
|
|
|
|
|
You're setting the world transform, then drawing the cube followed by the cylinder.
Either draw the cylinder before applying the world transform for the cube, or apply a new world transform before drawing the cylinder.
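In miniature, the fix looks like this (plain C++ sketch; the struct and field names are illustrative, not from the posted code). Each object keeps its own translation, the keyboard delta is applied to the cube's entry only, and each matrix is set immediately before that object's draw call:

```cpp
// One world translation per object; input touches only the cube's.
struct Translation { float x, y, z; };

struct Scene
{
    Translation cube{0.0f, 0.0f, 0.0f};
    Translation cylinder{-100.0f, 3.0f, 30.0f};  // values from drawCylinders()

    void onKey(float dx, float dy)
    {
        cube.x += dx;  // the keyboard moves the cube...
        cube.y += dy;  // ...and leaves the cylinder untouched
    }
};
```

In the posted drawScene() this corresponds to building a D3DXMatrixTranslation from the cube's values and calling SetTransform(D3DTS_WORLD, ...) before DrawIndexedPrimitive, then building a second matrix from the cylinder's values before mCylinder->DrawSubset(0), instead of sharing one matTranslate.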
|
|
|
|
|
Thank you very much for your help.
|
|
|
|
|
|
|