|
I am currently working on a project for a class. I need to read a .txt file that has a letter on the first line and two two-digit numbers on the second line, and write to another text file the ASCII code of the letter along with the sum and product of the two numbers.
I've been able to get the code to compile, but it prints out strange numbers instead of the expected answers. I've been told to initialize my variables, but when I set them to 0, that's all the program outputs. So I get a file like: the ASCII is 0, and 0 + 0 = 0, and the like.
Here is the code. Please tell me where I am messing up!
#include <iostream>
#include <fstream>
#include <string>
using namespace std;

int main()
{
    ifstream inFile;
    ofstream outFile;
    char letter1;
    double num1;
    double num2;
    inFile.open("input.txt");
    outFile.open("answers.txt");
    inFile >> letter1 >> num1 >> num2;
    outFile << "The ASCII value of your character: " << letter1 << " is " << static_cast<int>(letter1) << endl;
    outFile << num1 << " + " << num2 << " = " << num1 + num2 << endl;
    outFile << num1 << " * " << num2 << " = " << num1 * num2 << endl;
    system("PAUSE");
    return 0;
}
|
|
|
|
|
Why are num1 and num2 declared as double? I would try int (looking at your input file example):
int num1 = 0;
int num2 = 0;
What is that static_cast doing? Try
<< (int)letter1 << endl;
OK, that's old school these days, but I'm an old f@rt, so it doesn't matter... In the next two lines, bracket the operations so you get
<< (num1 + num2) << endl;
<< (num1 * num2) << endl;
The system("PAUSE") isn't doing much. You can step through your program using F10, for example (if you're using MSVC), and look at the variable contents. Or sprinkle some cout statements around to display things on the console/stdout instead of just in the file,
i.e.
cout << "The value of num1 is: " << num1 << endl;
|
|
|
|
|
I have just tried this as it stands, changing only static_cast<int>(letter1) to (int)letter1, and the results are correct. Looking at your results I can only assume that the program cannot find your input file, but as you do no error checking there is no notification of that condition. Test the stream state after the open call (e.g. inFile.is_open(), or simply if (!inFile)) to make sure the file actually opened.
I would also agree with Garth's suggestion that you use int rather than double for your numeric values.
txtspeak is the realm of 9 year old children, not developers. Christian Graus
|
|
|
|
|
Thanks a bunch for the help, guys! For some reason my class material says that static_cast<int>(letter1) works fine, but I couldn't get it to work at all; (int)letter1 worked amazingly well. The system("PAUSE"); was only in there for my debugging, so it got removed before the final submission. I did switch from double to int; I was using double before because, well, I still haven't fully fleshed out the differences between int, double, and float. Another problem I found was that I had put input.txt in as a source file, not a resource file, so that probably had a chunk to do with it also. Anyway, they are hitting us hard this term in programming, and since it's all done online, I really didn't have anyone to turn to. Thanks again for bailing me out!
|
|
|
|
|
Glad you figured it out. Just a couple of extra things:
1. See here[^] for documentation on static_cast.
2. The different numeric types are:
2.1 int, for holding values in integer format (i.e. without any fractional part), e.g. 1, 3, 212, 134788.
2.2 float and double, for holding values with a fractional (decimal) part, e.g. 3.14, 0.000497.
Generally speaking it is best to use integer types unless you must deal with fractional values, as floating-point types can lose precision owing to the manner in which they are stored in computer words.
txtspeak is the realm of 9 year old children, not developers. Christian Graus
|
|
|
|
|
Hello,
I'm trying to build a project but am getting errors LNK2001 and LNK2019. Here are the errors in normal and verbose mode.
1>MBackProp.obj : error LNK2019: unresolved external symbol "public: __thiscall CudaMultipleBackPropagation::~CudaMultipleBackPropagation(void)" (??1CudaMultipleBackPropagation@@QAE@XZ) referenced in function "public: void * __thiscall CudaMultipleBackPropagation::`scalar deleting destructor'(unsigned int)" (??_GCudaMultipleBackPropagation@@QAEPAXI@Z)
1>MBackPropDlg.obj : error LNK2001: unresolved external symbol "public: __thiscall CudaMultipleBackPropagation::~CudaMultipleBackPropagation(void)" (??1CudaMultipleBackPropagation@@QAE@XZ)
1>MBPTopologyCtrl.obj : error LNK2001: unresolved external symbol "public: __thiscall CudaMultipleBackPropagation::~CudaMultipleBackPropagation(void)" (??1CudaMultipleBackPropagation@@QAE@XZ)
1>MBackPropDlg.obj : error LNK2019: unresolved external symbol "public: __thiscall Cuda::Cuda(void)" (??0Cuda@@QAE@XZ) referenced in function "public: __thiscall CMBackPropDlg::CMBackPropDlg(void)" (??0CMBackPropDlg@@QAE@XZ)
1>MBackPropDlg.obj : error LNK2019: unresolved external symbol "public: __thiscall CudaMultipleBackPropagation::CudaMultipleBackPropagation(class Pointer<class MultipleBackPropagation> &,class Matrix<double> &,class Matrix<double> &)" (??0CudaMultipleBackPropagation@@QAE@AAV?$Pointer@VMultipleBackPropagation@@@@AAV?$Matrix@N@@1@Z) referenced in function "private: static unsigned int __cdecl CMBackPropDlg::TrainNetwork(void *)" (?TrainNetwork@CMBackPropDlg@@CAIPAX@Z)
1>MBackPropDlg.obj : error LNK2019: unresolved external symbol "public: void __thiscall CudaMultipleBackPropagation::CopyNetworkHost(class Pointer<class MultipleBackPropagation> &)" (?CopyNetworkHost@CudaMultipleBackPropagation@@QAEXAAV?$Pointer@VMultipleBackPropagation@@@@@Z) referenced in function "protected: void __thiscall CMBackPropDlg::TrainOneEpochUsingCuda(void)" (?TrainOneEpochUsingCuda@CMBackPropDlg@@IAEXXZ)
1>MBackPropDlg.obj : error LNK2019: unresolved external symbol "public: void __thiscall CudaMultipleBackPropagation::Train(double,double,bool,double,double)" (?Train@CudaMultipleBackPropagation@@QAEXNN_NNN@Z) referenced in function "protected: void __thiscall CMBackPropDlg::TrainOneEpochUsingCuda(void)" (?TrainOneEpochUsingCuda@CMBackPropDlg@@IAEXXZ)
1>C:\Documents and Settings\Administrator\MBP_clone\MBackProp\Debug\MBackProp.exe : fatal error LNK1120: 5 unresolved externals
Searching C:\Program Files\Microsoft SDKs\Windows\v6.0A\\lib\kernel32.lib:
Found __imp__GetLastError@0
Referenced in BackPropagation.obj
Referenced in MBackPropDlg.obj
Referenced in MultipleBackPropagation.obj
Referenced in VariablesData.obj
Loaded kernel32.lib(KERNEL32.dll)
Searching C:\Program Files\Microsoft SDKs\Windows\v6.0A\\lib\gdiplus.lib:
Searching C:\Program Files\Microsoft Visual Studio 9.0\VC\lib\comsuppwd.lib:
Found "void __stdcall _com_issue_error(long)" (?_com_issue_error@@YGXJ@Z)
Referenced in MBackPropDlg.obj
Loaded comsuppwd.lib(comsupp.obj)
Found "void __stdcall _com_raise_error(long,struct IErrorInfo *)" (?_com_raise_error@@YGXJPAUIErrorInfo@@@Z)
Referenced in comsuppwd.lib(comsupp.obj)
Loaded comsuppwd.lib(comraise.obj)
Found "long __cdecl _com_invoke_helper(struct IDispatch *,long,unsigned short,unsigned short,void *,wchar_t const *,char *,struct IErrorInfo * *)" (?_com_invoke_helper@@YAJPAUIDispatch@@JGGPAXPB_WPADPAPAUIErrorInfo@@@Z)
Referenced in comsuppwd.lib(comsupp.obj)
Loaded comsuppwd.lib(invkprxy.obj)
Found "long __stdcall _com_handle_excepinfo(struct tagEXCEPINFO &,struct IErrorInfo * *)" (?_com_handle_excepinfo@@YGJAAUtagEXCEPINFO@@PAPAUIErrorInfo@@@Z)
Referenced in comsuppwd.lib(invkprxy.obj)
Loaded comsuppwd.lib(invkerr.obj)
Can anybody direct me how to proceed with this? I have already set the lib paths
C:\Program Files\Microsoft SDKs\Windows\v6.0A\Include
C:\Program Files\Microsoft SDKs\Windows\v6.0A\Lib
but it doesn't help.
Krzysztof
|
|
|
|
|
I take it CudaMultipleBackPropagation is from another library?
If so, did you include that lib correctly, without any mismatch against the header you included for it?
The greatness of God cannot be underestimated.
modified on Sunday, March 28, 2010 1:17 AM
|
|
|
|
|
Do you have a path set to the Cuda library?
You measure democracy by the freedom it gives its dissidents, not the freedom it gives its assimilated conformists.
|
|
|
|
|
Hi Adam, Tim,
Yes, the paths to the CUDA library are set OK; there is a path to both the header and the object file.
Krzysztof
|
|
|
|
|
The external reference CudaMultipleBackPropagation::~CudaMultipleBackPropagation(void) and others, cannot be found by the linker. This indicates that the object or library containing these calls is not being found. The reason could be either that the path to the library has not been set in your project, or the name of the library or object file has not been added to the linker parameters. Check your project parameters again to be sure they are correct. Incidentally the path C:\Program Files\Microsoft SDKs\Windows\v6.0A\Include should be in your compiler include list not in your linker lib list.
txtspeak is the realm of 9 year old children, not developers. Christian Graus
|
|
|
|
|
I checked the directories. My object file CudaMultipleBackPropagation is located in the directory
C:\Documents and Settings\Administrator\MBP_clone\MBackProp\MBPCuda\Debug
Under Linker/General/Additional Library Directories I have
C:\Documents and Settings\Administrator\MBP_clone\MBackProp\MBPCuda\Debug
C:\Program Files\Microsoft SDKs\Windows\v6.0A\Lib
so the directories are OK. I also tried copying the object file to the main project directory, but it didn't help. Renaming the Debug directory to Debug1 also doesn't change anything; I always get the same linking errors.
C:\Program Files\Microsoft SDKs\Windows\v6.0A\Include I put in the include directories.
|
|
|
|
|
Did you add CudaMultipleBackPropagation.obj as an input file in your Linker options?
txtspeak is the realm of 9 year old children, not developers. Christian Graus
|
|
|
|
|
I did now, and got this:
1>Linking...
1>BackPropagation.obj : warning LNK4075: ignoring '/EDITANDCONTINUE' due to '/INCREMENTAL:NO' specification
1>nafxcwd.lib(afxmem.obj) : error LNK2005: "void * __cdecl operator new[](unsigned int)" (??_U@YAPAXI@Z) already defined in libcpmtd.lib(newaop.obj)
1>msvcrtd.lib(MSVCR90D.dll) : error LNK2005: __wassert already defined in libcmtd.lib(wassert.obj)
1>msvcrtd.lib(MSVCR90D.dll) : error LNK2005: _free already defined in libcmtd.lib(dbgfree.obj)
1>msvcrtd.lib(MSVCR90D.dll) : error LNK2005: _calloc already defined in libcmtd.lib(dbgcalloc.obj)
1>msvcrtd.lib(MSVCR90D.dll) : error LNK2005: __recalloc already defined in libcmtd.lib(dbgheap.obj)
1>msvcrtd.lib(MSVCR90D.dll) : error LNK2005: __CrtDbgReportW already defined in libcmtd.lib(dbgrptw.obj)
1>msvcrtd.lib(MSVCR90D.dll) : error LNK2005: _clock already defined in libcmtd.lib(clock.obj)
1>msvcrtd.lib(MSVCR90D.dll) : error LNK2005: _atoi already defined in libcmtd.lib(atox.obj)
1>msvcrtd.lib(MSVCR90D.dll) : error LNK2005: _atol already defined in libcmtd.lib(atox.obj)
1>msvcrtd.lib(MSVCR90D.dll) : error LNK2005: _memcpy_s already defined in libcmtd.lib(memcpy_s.obj)
1>msvcrtd.lib(MSVCR90D.dll) : error LNK2005: _isspace already defined in libcmtd.lib(_ctype.obj)
1>msvcrtd.lib(MSVCR90D.dll) : error LNK2005: _strchr already defined in libcmtd.lib(strchr.obj)
1>msvcrtd.lib(MSVCR90D.dll) : error LNK2005: __invalid_parameter already defined in libcmtd.lib(invarg.obj)
1>msvcrtd.lib(MSVCR90D.dll) : error LNK2005: "public: virtual __thiscall std::exception::~exception(void)" (??1exception@std@@UAE@XZ) already defined in libcmtd.lib(stdexcpt.obj)
1>msvcrtd.lib(MSVCR90D.dll) : error LNK2005: "public: __thiscall std::exception::exception(void)" (??0exception@std@@QAE@XZ) already defined in libcmtd.lib(stdexcpt.obj)
1>msvcrtd.lib(MSVCR90D.dll) : error LNK2005: "public: __thiscall std::exception::exception(class std::exception const &)" (??0exception@std@@QAE@ABV01@@Z) already defined in libcmtd.lib(stdexcpt.obj)
1>msvcrtd.lib(MSVCR90D.dll) : error LNK2005: _exit already defined in libcmtd.lib(crt0dat.obj)
1>msvcrtd.lib(MSVCR90D.dll) : error LNK2005: _printf already defined in libcmtd.lib(printf.obj)
1>msvcrtd.lib(MSVCR90D.dll) : error LNK2005: "public: __thiscall std::exception::exception(char const * const &)" (??0exception@std@@QAE@ABQBD@Z) already defined in libcmtd.lib(stdexcpt.obj)
1>msvcrtd.lib(MSVCR90D.dll) : error LNK2005: "public: __thiscall std::bad_cast::bad_cast(char const *)" (??0bad_cast@std@@QAE@PBD@Z) already defined in libcmtd.lib(stdexcpt.obj)
1>msvcrtd.lib(MSVCR90D.dll) : error LNK2005: _realloc already defined in libcmtd.lib(dbgrealloc.obj)
1>msvcrtd.lib(MSVCR90D.dll) : error LNK2005: _memmove_s already defined in libcmtd.lib(memmove_s.obj)
1>msvcrtd.lib(ti_inst.obj) : error LNK2005: "private: __thiscall type_info::type_info(class type_info const &)" (??0type_info@@AAE@ABV0@@Z) already defined in libcmtd.lib(typinfo.obj)
1>msvcrtd.lib(ti_inst.obj) : error LNK2005: "private: class type_info & __thiscall type_info::operator=(class type_info const &)" (??4type_info@@AAEAAV0@ABV0@@Z) already defined in libcmtd.lib(typinfo.obj)
1>msvcprtd.lib(MSVCP90D.dll) : error LNK2005: "public: __thiscall std::locale::~locale(void)" (??1locale@std@@QAE@XZ) already defined in libcpmtd.lib(locale0.obj)
1>msvcprtd.lib(MSVCP90D.dll) : error LNK2005: "void __cdecl std::_Debug_message(wchar_t const *,wchar_t const *,unsigned int)" (?_Debug_message@std@@YAXPB_W0I@Z) already defined in libcpmtd.lib(stdthrow.obj)
1>msvcprtd.lib(MSVCP90D.dll) : error LNK2005: "public: __thiscall std::_Lockit::~_Lockit(void)" (??1_Lockit@std@@QAE@XZ) already defined in libcpmtd.lib(xlock.obj)
1>msvcprtd.lib(MSVCP90D.dll) : error LNK2005: "public: __thiscall std::_Lockit::_Lockit(int)" (??0_Lockit@std@@QAE@H@Z) already defined in libcpmtd.lib(xlock.obj)
1>msvcprtd.lib(MSVCP90D.dll) : error LNK2005: "public: void __thiscall std::_Container_base_secure::_Orphan_all(void)const " (?_Orphan_all@_Container_base_secure@std@@QBEXXZ) already defined in libcpmtd.lib(locale0.obj)
1>msvcprtd.lib(MSVCP90D.dll) : error LNK2005: "public: __thiscall std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >::~basic_string<char,struct std::char_traits<char>,class std::allocator<char> >(void)" (??1?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@QAE@XZ) already defined in libcpmtd.lib(string.obj)
1>msvcprtd.lib(MSVCP90D.dll) : error LNK2005: "public: __thiscall std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >(char const *)" (??0?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@QAE@PBD@Z) already defined in libcpmtd.lib(string.obj)
1>msvcprtd.lib(MSVCP90D.dll) : error LNK2005: "public: __thiscall std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &)" (??0?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@QAE@ABV01@@Z) already defined in libcpmtd.lib(string.obj)
1>msvcprtd.lib(MSVCP90D.dll) : error LNK2005: "public: char const * __thiscall std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >::c_str(void)const " (?c_str@?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@QBEPBDXZ) already defined in libcpmtd.lib(locale0.obj)
1>msvcprtd.lib(MSVCP90D.dll) : error LNK2005: "public: void __thiscall std::locale::facet::_Incref(void)" (?_Incref@facet@locale@std@@QAEXXZ) already defined in libcpmtd.lib(locale0.obj)
1>msvcprtd.lib(MSVCP90D.dll) : error LNK2005: "public: __thiscall std::_Container_base_secure::~_Container_base_secure(void)" (??1_Container_base_secure@std@@QAE@XZ) already defined in libcpmtd.lib(locale0.obj)
1>msvcprtd.lib(MSVCP90D.dll) : error LNK2005: "public: __thiscall std::_Container_base_secure::_Container_base_secure(void)" (??0_Container_base_secure@std@@QAE@XZ) already defined in libcpmtd.lib(locale0.obj)
1>LINK : warning LNK4098: defaultlib 'LIBCMT' conflicts with use of other libs; use /NODEFAULTLIB:library
1>LINK : warning LNK4098: defaultlib 'mfc90ud.lib' conflicts with use of other libs; use /NODEFAULTLIB:library
1>LINK : warning LNK4098: defaultlib 'mfcs90ud.lib' conflicts with use of other libs; use /NODEFAULTLIB:library
1>LINK : warning LNK4098: defaultlib 'msvcrtd.lib' conflicts with use of other libs; use /NODEFAULTLIB:library
1>CudaMultipleBackPropagation.obj : error LNK2019: unresolved external symbol "void __cdecl FireLayer__entry(float *,float *,float *,int,int,float *)" (?FireLayer__entry@@YAXPAM00HH0@Z) referenced in function "public: void __thiscall CudaMultipleBackPropagation::DeviceLayer::Fire(int)" (?Fire@DeviceLayer@CudaMultipleBackPropagation@@QAEXH@Z)
1>CudaMultipleBackPropagation.obj : error LNK2019: unresolved external symbol "void __cdecl KernelFireLayer(int,struct dim3 &,int,float *,float *,float *,int,int,float *,int)" (?KernelFireLayer@@YAXHAAUdim3@@HPAM11HH1H@Z) referenced in function "public: void __thiscall CudaMultipleBackPropagation::DeviceLayer::Fire(int)" (?Fire@DeviceLayer@CudaMultipleBackPropagation@@QAEXH@Z)
1>CudaMultipleBackPropagation.obj : error LNK2019: unresolved external symbol "void __cdecl FireOutputLayer__entry(float *,float *,float *,int,int,float *,float *,float *,float *,float *)" (?FireOutputLayer__entry@@YAXPAM00HH00000@Z) referenced in function "public: void __thiscall CudaMultipleBackPropagation::DeviceLayer::Fire(int)" (?Fire@DeviceLayer@CudaMultipleBackPropagation@@QAEXH@Z)
1>CudaMultipleBackPropagation.obj : error LNK2019: unresolved external symbol _cudaConfigureCall@32 referenced in function "public: void __thiscall CudaMultipleBackPropagation::DeviceLayer::Fire(int)" (?Fire@DeviceLayer@CudaMultipleBackPropagation@@QAEXH@Z)
1>CudaMultipleBackPropagation.obj : error LNK2019: unresolved external symbol "void __cdecl KernelFireOutputLayer(int,struct dim3 &,int,float *,float *,float *,int,int,float *,float *,float *,float *,float *,int)" (?KernelFireOutputLayer@@YAXHAAUdim3@@HPAM11HH11111H@Z) referenced in function "public: void __thiscall CudaMultipleBackPropagation::DeviceLayer::Fire(int)" (?Fire@DeviceLayer@CudaMultipleBackPropagation@@QAEXH@Z)
1>CudaMultipleBackPropagation.obj : error LNK2019: unresolved external symbol "void __cdecl CalculateLocalGradient__entry(float *,float *,float,float *,float *,float *,int,int,float *,float *,float *)" (?CalculateLocalGradient__entry@@YAXPAM0M000HH000@Z) referenced in function "public: void __thiscall CudaMultipleBackPropagation::DeviceLayer::CalculateLocalGradient(int,float *,float *,float,class CudaMultipleBackPropagation::DeviceLayer *)" (?CalculateLocalGradient@DeviceLayer@CudaMultipleBackPropagation@@QAEXHPAM0MPAV12@@Z)
1>CudaMultipleBackPropagation.obj : error LNK2019: unresolved external symbol "void __cdecl KernelCorrectLayerWeights(int,struct dim3 &,int,float *,float *,float,float *,float *,float *,float *,float *,float *,float,float,float,float,int)" (?KernelCorrectLayerWeights@@YAXHAAUdim3@@HPAM1M111111MMMMH@Z) referenced in function "public: void __thiscall CudaMultipleBackPropagation::DeviceLayer::CorrectWeights(int,int,float *,float *,float,float,float)" (?CorrectWeights@DeviceLayer@CudaMultipleBackPropagation@@QAEXHHPAM0MMM@Z)
1>CudaMultipleBackPropagation.obj : error LNK2019: unresolved external symbol _cudaStreamCreate@4 referenced in function "public: __thiscall CudaMultipleBackPropagation::CudaMultipleBackPropagation(class Pointer<class MultipleBackPropagation> &,class Matrix<double> &,class Matrix<double> &)" (??0CudaMultipleBackPropagation@@QAE@AAV?$Pointer@VMultipleBackPropagation@@@@AAV?$Matrix@N@@1@Z)
1>CudaMultipleBackPropagation.obj : error LNK2019: unresolved external symbol _cudaMallocHost@8 referenced in function "public: __thiscall CudaMultipleBackPropagation::CudaMultipleBackPropagation(class Pointer<class MultipleBackPropagation> &,class Matrix<double> &,class Matrix<double> &)" (??0CudaMultipleBackPropagation@@QAE@AAV?$Pointer@VMultipleBackPropagation@@@@AAV?$Matrix@N@@1@Z)
1>CudaMultipleBackPropagation.obj : error LNK2019: unresolved external symbol _cudaFreeHost@4 referenced in function "public: __thiscall CudaMultipleBackPropagation::~CudaMultipleBackPropagation(void)" (??1CudaMultipleBackPropagation@@QAE@XZ)
1>CudaMultipleBackPropagation.obj : error LNK2019: unresolved external symbol _cudaStreamDestroy@4 referenced in function "public: __thiscall CudaMultipleBackPropagation::~CudaMultipleBackPropagation(void)" (??1CudaMultipleBackPropagation@@QAE@XZ)
1>CudaMultipleBackPropagation.obj : error LNK2019: unresolved external symbol "void __cdecl RobustLearning__entry(float *,float *,float,int,int *,float * *,float * *,float * *,float,float * *,float * *)" (?RobustLearning__entry@@YAXPAM0MHPAHPAPAM22M22@Z) referenced in function "public: void __thiscall CudaMultipleBackPropagation::Train(double,double,bool,double,double)" (?Train@CudaMultipleBackPropagation@@QAEXNN_NNN@Z)
1>CudaMultipleBackPropagation.obj : error LNK2019: unresolved external symbol _cudaMemcpyAsync@20 referenced in function "public: void __thiscall CudaMultipleBackPropagation::Train(double,double,bool,double,double)" (?Train@CudaMultipleBackPropagation@@QAEXNN_NNN@Z)
1>CudaMultipleBackPropagation.obj : error LNK2019: unresolved external symbol _cudaStreamQuery@4 referenced in function "public: void __thiscall CudaMultipleBackPropagation::Train(double,double,bool,double,double)" (?Train@CudaMultipleBackPropagation@@QAEXNN_NNN@Z)
1>CudaMultipleBackPropagation.obj : error LNK2019: unresolved external symbol "void __cdecl KernelCalculateRMS(int,int,float *,float *,int,float)" (?KernelCalculateRMS@@YAXHHPAM0HM@Z) referenced in function "public: void __thiscall CudaMultipleBackPropagation::Train(double,double,bool,double,double)" (?Train@CudaMultipleBackPropagation@@QAEXNN_NNN@Z)
1>CudaMultipleBackPropagation.obj : error LNK2019: unresolved external symbol ___cudaRegisterFatBinary@4 referenced in function ___sti____cudaRegisterAll_62_tmpxft_00001c24_00000000_6_CudaMultipleBackPropagation_cpp1_ii_a1e02e89
1>CudaMultipleBackPropagation.obj : error LNK2019: unresolved external symbol ___cudaUnregisterFatBinary@4 referenced in function ___cudaUnregisterBinaryUtil
1>CudaMultipleBackPropagation.obj : error LNK2019: unresolved external symbol _cudaMemcpy@16 referenced in function "public: __thiscall DeviceArray<float>::DeviceArray<float>(class HostArray<float> &)" (??0?$DeviceArray@M@@QAE@AAV?$HostArray@M@@@Z)
1>CudaMultipleBackPropagation.obj : error LNK2019: unresolved external symbol _cudaFree@4 referenced in function "public: __thiscall DeviceArray<float>::~DeviceArray<float>(void)" (??1?$DeviceArray@M@@QAE@XZ)
1>CudaMultipleBackPropagation.obj : error LNK2019: unresolved external symbol _cudaMalloc@8 referenced in function "private: void __thiscall DeviceArray<float>::Alloc(int)" (?Alloc@?$DeviceArray@M@@AAEXH@Z)
1>MBackPropDlg.obj : error LNK2019: unresolved external symbol "public: __thiscall Cuda::Cuda(void)" (??0Cuda@@QAE@XZ) referenced in function "public: __thiscall CMBackPropDlg::CMBackPropDlg(void)" (??0CMBackPropDlg@@QAE@XZ)
1>C:\Documents and Settings\Administrator\MBP_clone\MBackProp\Debug\MBackProp.exe : fatal error LNK1120: 21 unresolved externals
1>Caching metadata information for c:\program files\microsoft visual studio 9.0\vc\atlmfc\lib\mfcmifc80.dll...
1>Caching metadata information for c:\documents and settings\administrator\mbp_clone\mbpgrid\bin\debug\mbpgrid.dll...
1>Build log was saved at "file://c:\Documents and Settings\Administrator\MBP_clone\MBackProp\Debug\BuildLog.htm"
1>MBackProp - 58 error(s), 5 warning(s)
========== Rebuild All: 0 succeeded, 1 failed, 0 skipped ==========
|
|
|
|
|
See this explanation[^] on linking with MFC.
txtspeak is the realm of 9 year old children, not developers. Christian Graus
|
|
|
|
|
After using the 1st method to avoid wrong-order linking and rebuilding the solution, I'm getting this:
1>BackPropagation.obj : warning LNK4075: ignoring '/EDITANDCONTINUE' due to '/INCREMENTAL:NO' specification
1>LINK : warning LNK4098: defaultlib 'mfc90ud.lib' conflicts with use of other libs; use /NODEFAULTLIB:library
1>LINK : warning LNK4098: defaultlib 'mfcs90ud.lib' conflicts with use of other libs; use /NODEFAULTLIB:library
1>CudaMultipleBackPropagation.obj : error LNK2019: unresolved external symbol "void __cdecl FireLayer__entry(float *,float *,float *,int,int,float *)" (?FireLayer__entry@@YAXPAM00HH0@Z) referenced in function "public: void __thiscall CudaMultipleBackPropagation::DeviceLayer::Fire(int)" (?Fire@DeviceLayer@CudaMultipleBackPropagation@@QAEXH@Z)
1>CudaMultipleBackPropagation.obj : error LNK2019: unresolved external symbol "void __cdecl KernelFireLayer(int,struct dim3 &,int,float *,float *,float *,int,int,float *,int)" (?KernelFireLayer@@YAXHAAUdim3@@HPAM11HH1H@Z) referenced in function "public: void __thiscall CudaMultipleBackPropagation::DeviceLayer::Fire(int)" (?Fire@DeviceLayer@CudaMultipleBackPropagation@@QAEXH@Z)
1>CudaMultipleBackPropagation.obj : error LNK2019: unresolved external symbol "void __cdecl FireOutputLayer__entry(float *,float *,float *,int,int,float *,float *,float *,float *,float *)" (?FireOutputLayer__entry@@YAXPAM00HH00000@Z) referenced in function "public: void __thiscall CudaMultipleBackPropagation::DeviceLayer::Fire(int)" (?Fire@DeviceLayer@CudaMultipleBackPropagation@@QAEXH@Z)
1>C
Obviously it's a different problem now with LNK2019, because CudaMultipleBackPropagation.obj is defined in the linker inputs.
|
|
|
|
|
You seem to be getting nowhere fast on this and it all revolves around CudaMultipleBackPropagation.obj as far as I can see. I don't know where this module comes from but if you have the source you may want to try rebuilding it. It may be that this object contains a #pragma comment that includes the library that is causing the conflict.
txtspeak is the realm of 9 year old children, not developers. Christian Graus
|
|
|
|
|
Rebuilding doesn't help, and there is no pragma, but I do have the source:
KERNEL FireLayer(CUDA_FLOATING_TYPE * inputs, CUDA_FLOATING_TYPE * weights, CUDA_FLOATING_TYPE * m, int mOffset, int totalNeuronsWithSelectiveActivation, CUDA_FLOATING_TYPE * outputs);
KERNEL FireOutputLayer(CUDA_FLOATING_TYPE * inputs, CUDA_FLOATING_TYPE * weights, CUDA_FLOATING_TYPE * m, int mOffset, int totalNeuronsWithSelectiveActivation, CUDA_FLOATING_TYPE * desiredOutputs, CUDA_FLOATING_TYPE * outputs, CUDA_FLOATING_TYPE * localGradient, CUDA_FLOATING_TYPE * rms, CUDA_FLOATING_TYPE * localGradientSpaceNet);
void KernelFireLayer(cudaStream_t stream, dim3 & gridSize, int blockSize, CUDA_FLOATING_TYPE * inputs, CUDA_FLOATING_TYPE * weights, CUDA_FLOATING_TYPE * m, int mOffset, int totalNeuronsWithSelectiveActivation, CUDA_FLOATING_TYPE * outputs, int numInputs);
void KernelFireOutputLayer(cudaStream_t stream, dim3 & gridSize, int blockSize, CUDA_FLOATING_TYPE * inputs, CUDA_FLOATING_TYPE * weights, CUDA_FLOATING_TYPE * m, int mOffset, int totalNeuronsWithSelectiveActivation, CUDA_FLOATING_TYPE * desiredOutputs, CUDA_FLOATING_TYPE * outputs, CUDA_FLOATING_TYPE * localGradient, CUDA_FLOATING_TYPE * rms, CUDA_FLOATING_TYPE * localGradientSpaceNet, int numInputs);
and the errors:
1>CudaMultipleBackPropagation.obj : error LNK2019: unresolved external symbol "void __cdecl FireLayer__entry(float *,float *,float *,int,int,float *)" (?FireLayer__entry@@YAXPAM00HH0@Z) referenced in function "public: void __thiscall CudaMultipleBackPropagation::DeviceLayer::Fire(int)" (?Fire@DeviceLayer@CudaMultipleBackPropagation@@QAEXH@Z)
1>CudaMultipleBackPropagation.obj : error LNK2019: unresolved external symbol "void __cdecl KernelFireLayer(int,struct dim3 &,int,float *,float *,float *,int,int,float *,int)" (?KernelFireLayer@@YAXHAAUdim3@@HPAM11HH1H@Z) referenced in function "public: void __thiscall CudaMultipleBackPropagation::DeviceLayer::Fire(int)" (?Fire@DeviceLayer@CudaMultipleBackPropagation@@QAEXH@Z)
So it complains about this call: CudaMultipleBackPropagation::DeviceLayer::Fire(int).
It is declared like this:
void Fire(cudaStream_t stream);
It is then mapped to FireLayer. So does it mean that the arguments of CudaMultipleBackPropagation::DeviceLayer::Fire, i.e. cudaStream_t stream, don't map to the arguments of FireLayer? Why does compilation pass, then? And what does the __entry after FireLayer mean?
The whole class:
#include "../cuda.h"
#include "../MultipleBackPropagation.h"
#include "../../Common/CUDA/CudaDefinitions.h"
#include "../../Common/CUDA/Arrays/DeviceArray.h"
#include "../../Common/CUDA/Arrays/HostArray.h"
class CudaMultipleBackPropagation {
private:
class DeviceLayer {
friend class CudaMultipleBackPropagation;
private:
static int neuronsWithSelectiveActivation;
int patterns;
int neurons;
int inputs;
int inputsWithoutBias;
int connections;
DeviceArray<CUDA_FLOATING_TYPE> weights;
DeviceArray<CUDA_FLOATING_TYPE> bestWeights;
DeviceArray<CUDA_FLOATING_TYPE> learnRate;
DeviceArray<CUDA_FLOATING_TYPE> lastDelta;
DeviceArray<CUDA_FLOATING_TYPE> lastDeltaWithoutLearningMomentum;
DeviceArray<CUDA_FLOATING_TYPE> outputs;
DeviceArray<CUDA_FLOATING_TYPE> localGradient;
CUDA_FLOATING_TYPE * inputValues;
CUDA_FLOATING_TYPE * desOutputs;
CUDA_FLOATING_TYPE * m;
int mOffset;
CUDA_FLOATING_TYPE * lgSpaceNet;
CUDA_FLOATING_TYPE * rms;
dim3 dimNeuronsPatterns;
dim3 dimInputsNeurons;
dim3 dimOutputsNeurons;
int inputsBlockSize;
int sharedMemFire;
int sharedMemGradients;
bool isOutputLayer;
public:
DeviceLayer(HostArray<CUDA_FLOATING_TYPE> & hweights, HostArray<CUDA_FLOATING_TYPE> & hlearnRate, HostArray<CUDA_FLOATING_TYPE> & hlastDelta, HostArray<CUDA_FLOATING_TYPE> & hlastDeltaWithoutLearningMomentum, DeviceArray<CUDA_FLOATING_TYPE> * layerInputs, int inputs, int neurons, int nextLayerNeurons, int patterns, CUDA_FLOATING_TYPE * m, int mOffset, CUDA_FLOATING_TYPE * lgSpaceNet) : weights(hweights), learnRate(hlearnRate), lastDelta(hlastDelta), lastDeltaWithoutLearningMomentum(hlastDeltaWithoutLearningMomentum), outputs(neurons * patterns), localGradient(neurons * patterns), dimNeuronsPatterns(neurons, patterns), dimInputsNeurons(inputs, neurons), bestWeights(hweights.Lenght()), dimOutputsNeurons(nextLayerNeurons, neurons) {
connections = hweights.Lenght();
this->m = m;
this->mOffset = mOffset;
this->lgSpaceNet = lgSpaceNet;
this->inputs = inputs;
this->neurons = neurons;
this->patterns = patterns;
inputsWithoutBias = inputs - 1;
inputsBlockSize = 1;
while(inputsBlockSize < MAX_THREADS_PER_BLOCK && inputsBlockSize < inputs) inputsBlockSize <<= 1;
sharedMemFire = weights.Lenght() * sizeof(CUDA_FLOATING_TYPE);
sharedMemGradients = (nextLayerNeurons * (neurons + 1)) * sizeof(CUDA_FLOATING_TYPE);
inputValues = layerInputs->Pointer();
desOutputs = rms = NULL;
isOutputLayer = false;
}
void DefineOutputLayer(CudaMultipleBackPropagation * cmbp) {
isOutputLayer = true;
desOutputs = cmbp->d_desOutputs->Pointer();
rms = cmbp->d_rms->Pointer();
sharedMemFire += neurons * sizeof(CUDA_FLOATING_TYPE);
}
void Fire(cudaStream_t stream);
void CalculateLocalGradient(cudaStream_t stream, CUDA_FLOATING_TYPE * rms, CUDA_FLOATING_TYPE * bestRMS, CUDA_FLOATING_TYPE rmsGrowToApplyRobustLearning, DeviceLayer * nextLayer);
void CorrectWeights(cudaStream_t stream, int patternsBlockSize, CUDA_FLOATING_TYPE * rms, CUDA_FLOATING_TYPE * bestRMS, CUDA_FLOATING_TYPE rmsGrowToApplyRobustLearning, CUDA_FLOATING_TYPE robustFactor, CUDA_FLOATING_TYPE momentum);
};
List<DeviceLayer> layersSpaceNetwork;
List<DeviceLayer> layers;
Pointer< DeviceArray<CUDA_FLOATING_TYPE> > d_inputs;
Pointer< DeviceArray<CUDA_FLOATING_TYPE> > d_desOutputs;
Pointer< DeviceArray<CUDA_FLOATING_TYPE> > d_rms;
Pointer< DeviceArray<CUDA_FLOATING_TYPE> > d_bestRMS;
DeviceArray<CUDA_FLOATING_TYPE> d_rmsOut;
CUDA_FLOATING_TYPE * rms;
Pointer< DeviceArray<int> > d_numberWeightsLayer;
Pointer< DeviceArray<CUDA_FLOATING_TYPE *> > d_weightsLayers;
Pointer< DeviceArray<CUDA_FLOATING_TYPE *> > d_bestWeightsLayers;
Pointer< DeviceArray<CUDA_FLOATING_TYPE *> > d_learnRatesLayers;
Pointer< DeviceArray<CUDA_FLOATING_TYPE *> > d_lastDeltaLayers;
Pointer< DeviceArray<CUDA_FLOATING_TYPE *> > d_lastDeltaWithoutLMlayers;
cudaStream_t streamKernels;
cudaStream_t streamRMS;
int layersRobustTraining;
int maxNumberWeigths;
int patternsBlockSize;
CUDA_FLOATING_TYPE numberPatternsNeurons;
void CreateDeviceLayers(List<Layer> & hostLayers, List<DeviceLayer> & deviceLayers, int patterns, int * neuronsWithSelectiveActivation);
void CopyLayersToHost(List<DeviceLayer> & deviceLayers, List<Layer> & hostLayers);
public:
CudaMultipleBackPropagation(Pointer <MultipleBackPropagation> & mbp, Matrix<double> & trainInputPatterns, Matrix<double> & trainDesiredOutputPatterns);
~CudaMultipleBackPropagation();
void Train(double momentum, double spaceMomentum, bool robustLearning, double rmsGrowToApplyRobustLearning, double robustFactor);
CUDA_FLOATING_TYPE GetRMS() {
return *rms;
}
void CopyNetworkHost(Pointer <MultipleBackPropagation> & mbp);
};
#endif
#include "CudaMultipleBackPropagation.h"
#include "MBPkernels.h"
int CudaMultipleBackPropagation::DeviceLayer::neuronsWithSelectiveActivation = 0;
void CudaMultipleBackPropagation::DeviceLayer::Fire(cudaStream_t stream) {
if (isOutputLayer) {
if(connections > MAX_THREADS_PER_BLOCK) {
KernelFireOutputLayer(stream, dimNeuronsPatterns, inputsBlockSize, inputValues, weights.Pointer(), m, mOffset, neuronsWithSelectiveActivation, desOutputs, outputs.Pointer(), localGradient.Pointer(), rms, lgSpaceNet, inputsWithoutBias);
} else {
FireOutputLayer<<<patterns, dimInputsNeurons, sharedMemFire, stream>>>(inputValues, weights.Pointer(), m, mOffset, neuronsWithSelectiveActivation, desOutputs, outputs.Pointer(), localGradient.Pointer(), rms, lgSpaceNet);
}
} else {
if(connections > MAX_THREADS_PER_BLOCK) {
KernelFireLayer(stream, dimNeuronsPatterns, inputsBlockSize, inputValues, weights.Pointer(), m, mOffset, neuronsWithSelectiveActivation, outputs.Pointer(), inputsWithoutBias);
} else {
FireLayer<<<patterns, dimInputsNeurons, sharedMemFire, stream>>>(inputValues, weights.Pointer(), m, mOffset, neuronsWithSelectiveActivation, outputs.Pointer());
}
}
}
void CudaMultipleBackPropagation::DeviceLayer::CalculateLocalGradient(cudaStream_t stream, CUDA_FLOATING_TYPE * rms, CUDA_FLOATING_TYPE * bestRMS, CUDA_FLOATING_TYPE rmsGrowToApplyRobustLearning, DeviceLayer * nextLayer) {
::CalculateLocalGradient<<<patterns, dimOutputsNeurons, sharedMemGradients, stream>>>(rms, bestRMS, rmsGrowToApplyRobustLearning, outputs.Pointer(), nextLayer->weights.Pointer(), m, mOffset, neuronsWithSelectiveActivation, nextLayer->localGradient.Pointer(), localGradient.Pointer(), lgSpaceNet);
}
void CudaMultipleBackPropagation::DeviceLayer::CorrectWeights(cudaStream_t stream, int patternsBlockSize, CUDA_FLOATING_TYPE * rms, CUDA_FLOATING_TYPE * bestRMS, CUDA_FLOATING_TYPE rmsGrowToApplyRobustLearning, CUDA_FLOATING_TYPE robustFactor, CUDA_FLOATING_TYPE momentum) {
KernelCorrectLayerWeights(stream, dimInputsNeurons, patternsBlockSize, rms, bestRMS, rmsGrowToApplyRobustLearning, inputValues, localGradient.Pointer(), weights.Pointer(), learnRate.Pointer(), lastDeltaWithoutLearningMomentum.Pointer(), lastDelta.Pointer(), (CUDA_FLOATING_TYPE) Connection::u, (CUDA_FLOATING_TYPE) Connection::d, robustFactor, momentum, patterns);
}
void CudaMultipleBackPropagation::CreateDeviceLayers(List<Layer> & hostLayers, List<DeviceLayer> & deviceLayers, int patterns, int * neuronsWithSelectiveActivation) {
Layer * l = hostLayers.First();
int inputsWithoutBias = l->neurons.Lenght();
DeviceArray<CUDA_FLOATING_TYPE> * layerInputs = d_inputs;
DeviceLayer * outputLayerSpaceNetwork = layersSpaceNetwork.Last();
CUDA_FLOATING_TYPE * m = (neuronsWithSelectiveActivation == NULL) ? NULL : outputLayerSpaceNetwork->outputs.Pointer();
CUDA_FLOATING_TYPE * lgSpaceNet = (neuronsWithSelectiveActivation == NULL) ? NULL : outputLayerSpaceNetwork->localGradient.Pointer();
int mOffset = 0;
Layer * nextLayer = hostLayers.Next();
for (int ln = 1; (l = nextLayer) != NULL; ln++) {
int neurons = l->neurons.Lenght();
int inputs = inputsWithoutBias + 1;
int connections = inputs * neurons;
if (connections > maxNumberWeigths) maxNumberWeigths = connections;
HostArray<CUDA_FLOATING_TYPE> weights(connections);
HostArray<CUDA_FLOATING_TYPE> learningRate(connections);
HostArray<CUDA_FLOATING_TYPE> lDelta(connections);
HostArray<CUDA_FLOATING_TYPE> lastDeltaWithoutLearningMomentum(connections);
int w = 0;
for(NeuronWithInputConnections * n = static_cast<NeuronWithInputConnections *> (l->neurons.First()); n != NULL; n = static_cast<NeuronWithInputConnections *> (l->neurons.Next())) {
for(Connection * c = n->inputs.First(); c != NULL; c = n->inputs.Next()) {
weights[w] = (CUDA_FLOATING_TYPE) c->weight;
learningRate[w] = (CUDA_FLOATING_TYPE) c->learningRate;
lDelta[w] = (CUDA_FLOATING_TYPE) c->delta;
lastDeltaWithoutLearningMomentum[w] = (CUDA_FLOATING_TYPE) c->lastDeltaWithoutLearningMomentum;
w++;
}
}
int numberNeuronsWithSelectiveActivation = (m == NULL) ? 0 : neuronsWithSelectiveActivation[ln];
CUDA_FLOATING_TYPE * ml = (numberNeuronsWithSelectiveActivation) ? m : NULL;
CUDA_FLOATING_TYPE * lgSpaceNetl = (numberNeuronsWithSelectiveActivation) ? lgSpaceNet : NULL;
nextLayer = hostLayers.Next();
int nextLayerNeurons = (nextLayer == NULL) ? 0 : nextLayer->neurons.Lenght();
DeviceLayer * dl = new DeviceLayer(weights, learningRate, lDelta, lastDeltaWithoutLearningMomentum, layerInputs, inputs, neurons, nextLayerNeurons, patterns, ml, mOffset, lgSpaceNetl);
deviceLayers.Add(dl);
mOffset += numberNeuronsWithSelectiveActivation;
layerInputs = &(dl->outputs);
inputsWithoutBias = neurons;
}
}
CudaMultipleBackPropagation::CudaMultipleBackPropagation(Pointer <MultipleBackPropagation> & mbp, Matrix<double> & trainInputPatterns, Matrix<double> & trainDesiredOutputPatterns) : d_rmsOut(1) {
int patterns = trainInputPatterns.Rows();
int ninputs = mbp->Inputs();
int noutputs = mbp->Outputs();
HostArray<CUDA_FLOATING_TYPE> inputs(ninputs * patterns);
HostArray<CUDA_FLOATING_TYPE> desiredOutputs(noutputs * patterns);
for(int p = 0; p < patterns; p++) {
for (int i = 0; i < ninputs; i++) inputs[p * ninputs + i] = (CUDA_FLOATING_TYPE) trainInputPatterns[p][i];
for (int o = 0; o < noutputs; o++) desiredOutputs[p * noutputs + o] = (CUDA_FLOATING_TYPE) trainDesiredOutputPatterns[p][o];
}
d_inputs = new DeviceArray<CUDA_FLOATING_TYPE>(inputs);
d_desOutputs = new DeviceArray<CUDA_FLOATING_TYPE>(desiredOutputs);
maxNumberWeigths = 0;
int * neuronsWithSelectiveActivation = NULL;
if (!mbp->spaceNetwork.IsNull()) {
CreateDeviceLayers(mbp->spaceNetwork->layers, layersSpaceNetwork, patterns, NULL);
neuronsWithSelectiveActivation = mbp->neuronsWithSelectiveActivation.Pointer();
DeviceLayer::neuronsWithSelectiveActivation = layersSpaceNetwork.Last()->neurons;
}
CreateDeviceLayers(mbp->layers, layers, patterns, neuronsWithSelectiveActivation);
DeviceLayer * dlOut = layers.Last();
layersRobustTraining = layersSpaceNetwork.Lenght() + layers.Lenght();
HostArray<int> numberWeightsLayer(layersRobustTraining);
HostArray<CUDA_FLOATING_TYPE *> weightsLayers(layersRobustTraining);
HostArray<CUDA_FLOATING_TYPE *> bestWeightsLayers(layersRobustTraining);
HostArray<CUDA_FLOATING_TYPE *> learnRatesLayers(layersRobustTraining);
HostArray<CUDA_FLOATING_TYPE *> lastDeltaLayers(layersRobustTraining);
HostArray<CUDA_FLOATING_TYPE *> lastDeltaWithoutLMlayers(layersRobustTraining);
int ll = 0;
for(DeviceLayer * l = layersSpaceNetwork.First(); l != NULL; l = layersSpaceNetwork.Next()) {
numberWeightsLayer[ll] = l->connections;
weightsLayers[ll] = l->weights.Pointer();
bestWeightsLayers[ll] = l->bestWeights.Pointer();
learnRatesLayers[ll] = l->learnRate.Pointer();
lastDeltaLayers[ll] = l->lastDelta.Pointer();
lastDeltaWithoutLMlayers[ll] = l->lastDeltaWithoutLearningMomentum.Pointer();
ll++;
}
for(DeviceLayer * l = layers.First(); l != NULL; l = layers.Next()) {
numberWeightsLayer[ll] = l->connections;
weightsLayers[ll] = l->weights.Pointer();
bestWeightsLayers[ll] = l->bestWeights.Pointer();
learnRatesLayers[ll] = l->learnRate.Pointer();
lastDeltaLayers[ll] = l->lastDelta.Pointer();
lastDeltaWithoutLMlayers[ll] = l->lastDeltaWithoutLearningMomentum.Pointer();
ll++;
}
d_numberWeightsLayer = new DeviceArray<int>(numberWeightsLayer);
d_weightsLayers = new DeviceArray<CUDA_FLOATING_TYPE *>(weightsLayers);
d_bestWeightsLayers = new DeviceArray<CUDA_FLOATING_TYPE *>(bestWeightsLayers);
d_learnRatesLayers = new DeviceArray<CUDA_FLOATING_TYPE *>(learnRatesLayers);
d_lastDeltaLayers = new DeviceArray<CUDA_FLOATING_TYPE *>(lastDeltaLayers);
d_lastDeltaWithoutLMlayers = new DeviceArray<CUDA_FLOATING_TYPE *>(lastDeltaWithoutLMlayers);
int sizeRMSvector = (dlOut->connections > MAX_THREADS_PER_BLOCK) ? patterns * dlOut->neurons : patterns;
d_rms = new DeviceArray<CUDA_FLOATING_TYPE>(sizeRMSvector);
dlOut->DefineOutputLayer(this);
HostArray<CUDA_FLOATING_TYPE> h_bestRMS(1);
h_bestRMS[0] = (patterns * CUDA_VALUE(3.0));
d_bestRMS = new DeviceArray<CUDA_FLOATING_TYPE>(h_bestRMS);
cudaMallocHost((void**) &rms, sizeof(CUDA_FLOATING_TYPE));
*rms = CUDA_VALUE(1.0);
patternsBlockSize = 1;
while(patternsBlockSize < MAX_THREADS_PER_BLOCK && patternsBlockSize < patterns) patternsBlockSize <<= 1;
numberPatternsNeurons = (CUDA_FLOATING_TYPE) patterns * (CUDA_FLOATING_TYPE) dlOut->neurons;
cudaStreamCreate(&streamKernels);
cudaStreamCreate(&streamRMS);
}
CudaMultipleBackPropagation::~CudaMultipleBackPropagation() {
cudaStreamDestroy(streamKernels);
cudaStreamDestroy(streamRMS);
*rms = CUDA_VALUE(1.0);
cudaFreeHost(rms);
}
void CudaMultipleBackPropagation::Train(double momentum, double spaceMomentum, bool robustLearning, double rmsGrowToApplyRobustLearning, double robustFactor) {
for(DeviceLayer * l = layersSpaceNetwork.First(); l != NULL; l = layersSpaceNetwork.Next()) l->Fire(streamKernels);
for(DeviceLayer * l = layers.First(); l != NULL; l = layers.Next()) l->Fire(streamKernels);
if (robustLearning) {
KernelCalculateRMS(streamKernels, patternsBlockSize, d_rms->Pointer(), d_rmsOut.Pointer(), d_rms->Lenght(), numberPatternsNeurons);
if (cudaStreamQuery(streamRMS) == cudaSuccess) cudaMemcpyAsync(rms, d_rmsOut.Pointer(), sizeof(CUDA_FLOATING_TYPE), cudaMemcpyDeviceToHost, streamRMS);
RobustLearning<<<1, maxNumberWeigths, 0, streamKernels>>>(d_rmsOut.Pointer(), d_bestRMS->Pointer(), (CUDA_FLOATING_TYPE) rmsGrowToApplyRobustLearning, layersRobustTraining, d_numberWeightsLayer->Pointer(), d_weightsLayers->Pointer(), d_bestWeightsLayers->Pointer(), d_learnRatesLayers->Pointer(), robustFactor, d_lastDeltaWithoutLMlayers->Pointer(), d_lastDeltaLayers->Pointer());
} else {
if (cudaStreamQuery(streamRMS) == cudaSuccess) {
KernelCalculateRMS(streamRMS, patternsBlockSize, d_rms->Pointer(), d_rmsOut.Pointer(), d_rms->Lenght(), numberPatternsNeurons);
cudaMemcpyAsync(rms, d_rmsOut.Pointer(), sizeof(CUDA_FLOATING_TYPE), cudaMemcpyDeviceToHost, streamRMS);
}
}
CUDA_FLOATING_TYPE * rms = (robustLearning) ? d_rmsOut.Pointer() : NULL;
CUDA_FLOATING_TYPE * bestRMS = (robustLearning) ? d_bestRMS->Pointer() : NULL;
DeviceLayer * nextLayer = layers.Last();
for(DeviceLayer * l = layers.Previous(); l != NULL; l = layers.Previous()) {
l->CalculateLocalGradient(streamKernels, rms, bestRMS, (CUDA_FLOATING_TYPE) rmsGrowToApplyRobustLearning, nextLayer);
nextLayer = l;
}
nextLayer = layersSpaceNetwork.Last();
for(DeviceLayer * l = layersSpaceNetwork.Previous(); l != NULL; l = layersSpaceNetwork.Previous()) {
l->CalculateLocalGradient(streamKernels, rms, bestRMS, (CUDA_FLOATING_TYPE) rmsGrowToApplyRobustLearning, nextLayer);
nextLayer = l;
}
for(DeviceLayer * l = layers.Last(); l != NULL; l = layers.Previous()) l->CorrectWeights(streamKernels, patternsBlockSize, rms, bestRMS, rmsGrowToApplyRobustLearning, robustFactor, momentum);
for(DeviceLayer * l = layersSpaceNetwork.Last(); l != NULL; l = layersSpaceNetwork.Previous()) l->CorrectWeights(streamKernels, patternsBlockSize, rms, bestRMS, rmsGrowToApplyRobustLearning, robustFactor, spaceMomentum);
}
void CudaMultipleBackPropagation::CopyLayersToHost(List<DeviceLayer> & deviceLayers, List<Layer> & hostLayers) {
hostLayers.First();
for(DeviceLayer * l = deviceLayers.First(); l != NULL; l = deviceLayers.Next()) { // was layers.Next(): must advance the list being iterated
Layer * hl = hostLayers.Next();
HostArray<CUDA_FLOATING_TYPE> dweights(l->weights);
HostArray<CUDA_FLOATING_TYPE> dlearnRate(l->learnRate);
HostArray<CUDA_FLOATING_TYPE> dlastDelta(l->lastDelta);
HostArray<CUDA_FLOATING_TYPE> dlastDeltaWithoutLearningMomentum(l->lastDeltaWithoutLearningMomentum);
int w = 0;
for(NeuronWithInputConnections * n = static_cast<NeuronWithInputConnections *> (hl->neurons.First()); n != NULL; n = static_cast<NeuronWithInputConnections *> (hl->neurons.Next())) {
for(Connection * c = n->inputs.First(); c != NULL; c = n->inputs.Next()) {
c->weight = dweights[w];
c->learningRate = dlearnRate[w];
c->delta = dlastDelta[w];
c->lastDeltaWithoutLearningMomentum = dlastDeltaWithoutLearningMomentum[w];
w++;
}
}
}
}
void CudaMultipleBackPropagation::CopyNetworkHost(Pointer <MultipleBackPropagation> & mbp) {
if (!mbp->spaceNetwork.IsNull()) CopyLayersToHost(layersSpaceNetwork, mbp->spaceNetwork->layers);
CopyLayersToHost(layers, mbp->layers);
Where is the implementation of KernelFireLayer()? Once again you are calling some function in your code that is not being included in your link process. I have no idea what part of this is code that you have written and what part comes from some external library, but that seems to be the issue you need to resolve.
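For readers hitting the same error: LNK2019 means the compiler accepted a declaration, but the linker never found a matching definition. A minimal C++ sketch of the pattern (the function name `KernelStub` is hypothetical, standing in for `KernelFireLayer`):

```cpp
#include <cassert>

// Declaration only: this satisfies the compiler at every call site.
int KernelStub(int x);

// Definition: leave this out of the link (exactly what happens when a .cu
// file is never compiled or linked into the build) and the build fails with
// LNK2019 "unresolved external symbol" for KernelStub, even though
// compilation of every caller succeeded.
int KernelStub(int x) {
    return x * 2;
}
```

With the definition present, calling `KernelStub(21)` works normally; removing it reproduces the error at link time, not compile time, which is the telltale sign of a missing source file or library.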
txtspeak is the realm of 9 year old children, not developers. Christian Graus
There is no external library; I just posted the header of the function.
Obviously the bug is here, I think: LNK2019 occurs for all of those functions. But I don't know what this __entry means.
So it looks like, because :Fire(cudaStream_t stream) has just 2 parameters and the functions inside take many more, those parameters are not seen
from inside the Fire function, so LNK2019 occurs. Do you agree with me?
I cannot find anywhere how this cudaStream_t stream is defined.
It's not my code, BTW.
void CudaMultipleBackPropagation::DeviceLayer::Fire(cudaStream_t stream) {
if (isOutputLayer) {
if(connections > MAX_THREADS_PER_BLOCK) {
KernelFireOutputLayer(stream, dimNeuronsPatterns, inputsBlockSize, inputValues, weights.Pointer(), m, mOffset, neuronsWithSelectiveActivation, desOutputs, outputs.Pointer(), localGradient.Pointer(), rms, lgSpaceNet, inputsWithoutBias);
} else {
FireOutputLayer<<<patterns, dimInputsNeurons, sharedMemFire, stream>>>(inputValues, weights.Pointer(), m, mOffset, neuronsWithSelectiveActivation, desOutputs, outputs.Pointer(), localGradient.Pointer(), rms, lgSpaceNet);
}
} else {
if(connections > MAX_THREADS_PER_BLOCK) {
KernelFireLayer(stream, dimNeuronsPatterns, inputsBlockSize, inputValues, weights.Pointer(), m, mOffset, neuronsWithSelectiveActivation, outputs.Pointer(), inputsWithoutBias);
} else {
FireLayer<<<patterns, dimInputsNeurons, sharedMemFire, stream>>>(inputValues, weights.Pointer(), m, mOffset, neuronsWithSelectiveActivation, outputs.Pointer());
}
}
}
KERNEL FireLayer(CUDA_FLOATING_TYPE * inputs, CUDA_FLOATING_TYPE * weights, CUDA_FLOATING_TYPE * m, int mOffset, int totalNeuronsWithSelectiveActivation, CUDA_FLOATING_TYPE * outputs) {
extern __shared__ CUDA_FLOATING_TYPE iw[];
int connection = NEURON * NUM_INPUTS_INCLUDING_BIAS + INPUT;
SumInputWeight(connection, inputs, weights);
if (INPUT == 0) {
int n = PATTERN * NUM_NEURONS + NEURON;
CUDA_FLOATING_TYPE output = CUDA_SIGMOID(iw[THREAD_ID]);
if (m != NULL) output *= m[PATTERN * totalNeuronsWithSelectiveActivation + NEURON + mOffset];
outputs[n] = output;
}
}
modified on Sunday, March 28, 2010 7:27 PM
Krzysiaczek99 wrote: :Fire(cudaStream_t stream) has just 2 parameters
That's one parameter.
Krzysiaczek99 wrote: But i dont know what this __entry means.
Just a way the compiler has of generating entry point names.
It certainly looks like there are some mismatches between function calls and definitions. Since this is not your code, your first port of call should have been the person whose code it is, rather than posting it here. I suggest you try that route now.
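On the signature-mismatch point: C++ links by mangled name, which encodes the full parameter list, so a declaration and a definition that differ in even one parameter type produce different symbols and an unresolved external. A small sketch (all function names here are hypothetical) of how mangling separates same-named functions:

```cpp
#include <cassert>

// Two functions named scale() coexist because C++ mangles the parameter
// list into the symbol name; each call resolves to the exactly-matching
// mangled name. A declaration whose mangled name matches no definition
// becomes an unresolved external (LNK2019) at link time.
int scale(int x)   { return x * 2; }
long scale(long x) { return x * 3; }

// extern "C" suppresses mangling: the symbol is plain "scale_c". This is
// why C-style entry points show up undecorated in linker error messages.
extern "C" int scale_c(int x) { return x * 4; }
```

So `scale(10)` picks the `int` overload and `scale(10L)` the `long` one; if a caller were built against a declaration with a third parameter type, the link would fail even though the name "scale" exists.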
Yes, I will speak to him. I think he forgot to include the file with the class definition for (cudaStream_t stream); I already found this file in another release of his code.
Are you trying to use Nvidia's CUDA or are you trying to link in a lib file that you compile yourself? If the former, is this your first project using CUDA? I think there are several examples you can download.
No, it's not my first CUDA project, but it's not my code. Anyway, the reason for those errors is a missing library with one class.
Hi all
I have a VB project that I wish to port to VC. The VB project makes use of a DLL file for which I do not have a .lib or .h file. Is there any way I can do what I am trying to do?
Sorry if this is a dumb question!
Cheers
Mike
Assuming you have the VB source code, you will just need to convert the DLL function declarations to C++ declarations.
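Even without a .lib or .h, the DLL can be loaded at run time with LoadLibrary/GetProcAddress and called through a function-pointer typedef. A sketch of the conversion, with the Windows lookup stubbed out so it is self-contained (the export name `Add` and its signature are assumptions for illustration):

```cpp
#include <cassert>

// C++ equivalent of a VB declaration such as:
//   Declare Function Add Lib "mydll.dll" (ByVal a As Long, ByVal b As Long) As Long
typedef long (*AddProc)(long, long);

// Stand-in for the DLL export; in real code this body lives inside the DLL.
extern "C" long Add(long a, long b) { return a + b; }

// Stand-in for the Windows lookup; in real code this would be roughly:
//   HMODULE h = LoadLibrary("mydll.dll");
//   AddProc p = (AddProc)GetProcAddress(h, "Add");
AddProc LookupAdd() { return &Add; }
```

With the real Windows calls substituted in, `LookupAdd()(2, 3)` calls into the DLL without ever needing an import library or header; only the exported name and signature have to be known.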