|
The company I work for is looking to change the architecture we work with. Right now we have one main application that does everything. We would like to split that into small applications that each do a smaller part of the work. These applications are in different languages (C#, Java, Erlang, Progress, ...). We are also looking at IoT, and we have some older applications written in BASIC. What I would like to know is whether anyone has experience with ESBs, so that all of these applications can communicate with each other. Our own services and applications are mainly written in C#.
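To make the question concrete for anyone answering: the core idea behind an ESB is that applications publish and subscribe to messages on named topics instead of calling each other directly. Below is a toy in-process sketch of that idea in Java (all names here are made up for illustration; a real bus product such as Mule, NServiceBus, or a RabbitMQ-based setup does the same thing across processes and languages, typically with JSON or XML payloads):

```java
import java.util.*;
import java.util.function.Consumer;

// Toy in-process "bus": each application publishes messages to a topic
// and subscribes to the topics it cares about, without knowing who else
// is listening. That decoupling is what an ESB provides across languages.
class ToyBus {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    void publish(String topic, String jsonPayload) {
        for (Consumer<String> h : subscribers.getOrDefault(topic, List.of())) {
            h.accept(jsonPayload); // a real bus would queue, persist, and retry
        }
    }
}

public class Demo {
    public static void main(String[] args) {
        ToyBus bus = new ToyBus();
        // The warehousing app (could be C#) and the customs app (could be
        // Progress) both react to the same event without knowing each other.
        bus.subscribe("item.repackaged", msg -> System.out.println("warehouse got: " + msg));
        bus.subscribe("item.repackaged", msg -> System.out.println("customs got: " + msg));
        bus.publish("item.repackaged", "{\"oldItem\":\"A1\",\"newItem\":\"B7\"}");
    }
}
```

The interesting question for the evaluation is then less "which ESB" and more which of your applications need which events, and in what format.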
|
|
|
|
|
I see a lot of buzz words and no actual business reasons.
Sounds like a developer read a single web article and decided that re-architecting the entire enterprise was the way to go.
Tom Wauters wrote: What i would like to know is if anyone has any experience in esb's
Certainly one would hope that the internet bloggers who push it, as well as sell their own expertise as consultants doing exactly that, have actually done it. But your mileage may differ if you turn to one of them (of course their bank account would appreciate it).
But other than that, is there a question?
|
|
|
|
|
We are a company that does lots of different things. We have transporting and warehousing; some of the goods need to be repackaged and given different item numbers; we also do customs clearance for our customers, etc. There aren't many full software packages out there that do it all. We see that at the moment with the software we are using right now. This is not a decision that is made on the spot. I am trying to get an overview of what the possibilities are before doing anything. I'm not even sure if it is going to be several packages. But thanks for the reply.
|
|
|
|
|
So then you start with a high-level requirements view and architecture, which must delve into some of the specific needs of the disparate systems, specifically addressing cross-system usage.
After doing that, you might look into broad technologies to see which best supports the model.
So: design first, followed by technology.
That of course is unfortunately mostly just rhetoric because large systems are complex. And that complexity means that solutions will never be clean. The design will not be clean and the implementation will not be clean.
Additionally, the complexity will almost invariably cause the system to fail to actually meet the needs of the business, due to:
1. The complexity itself.
2. Failure to actually capture real business requirements.
3. The need to actually reduce costs and/or increase profits.
One need only look at how large companies have managed to successfully fulfill their IT needs over time, which is by piecemeal improvement.
So it is perhaps much better to find a smaller piece that really needs to be improved and focus solely on that piece, while still striving to make it somewhat reasonable to use with future systems. But without over-engineering it in striving for the impossible goal of never modifying it in the future.
|
|
|
|
|
Follow up on my earlier posting here[^]...
Can I host my SignalR service on Azure? Anyone done this? Any pointers / thoughts??
If it's not broken, fix it until it is
|
|
|
|
|
|
Thanks.
I've been reading through that page. Not a bad site as far as learning SignalR goes. Just wanted to be sure there weren't any caveats I'm not seeing.
If it's not broken, fix it until it is
|
|
|
|
|
I am working towards developing a system that has some of its components in Microsoft Azure and the rest on premises. Some key components are Azure Cloud Services, which has a web role (an MVC web application), and an Azure SQL DB as a local datastore. The core database systems (systems of record) are on premises. Certain transactions follow a specific flow where they need to be updated in the Azure SQL DB first; then they will be asynchronously updated in the core DB on premises, and then this update, along with some other calculated values, will flow back to the Azure SQL DB. This flow back to the Azure SQL DB is planned using Azure SQL Data Sync, which has a minimum delay of 5 minutes between consecutive updates. The web will always display its data from the Azure SQL DB.
In a scenario where a customer updates a field - say, name - two or more times in quick succession, the web application, since it is getting data from the Azure SQL DB, will display the last updated value. However, since the core databases are updated asynchronously and these updates flow back to the Azure SQL DB at a later point in time, the last record the web wrote to the Azure SQL DB can be replaced by values coming from the core database that resulted from the earlier updates. This may cause some inconsistency in the user experience. The customer may see the latest updated "Name" as soon as he makes his final update, but this value will change (to one of the earlier updates) after the data sync, and only after that will it display the last updated value again. Any recommendations on the best way to implement such a functionality?
Moreover, what would be the best option to sync an on-premises SQL Server and an Azure SQL DB, if not Azure SQL Data Sync?
|
|
|
|
|
|
In my opinion, you have a (design) problem in how you send updates from "core" to Azure.
Other than a few (new) "calculations", you're sending redundant updates; that is not how one "syncs".
Create a proper calculation transaction.
|
|
|
|
|
Rajeshjoseph wrote: In a scenario where a customer updates a field - say name - immediately two or more times, ....Any recommendations on the best way to implement such a functionality?
Best suggestion I know of - stop making up business cases.
When exactly, in the real world, is a user going to update their name twice from two different locations at the same time?
And if they do how are you going to determine which one is 'correct'?
Apply that same reasoning to any other concurrency issues that you might come up with.
Unless you can answer both questions definitively, the answer is 'last one wins'. And it wins by default, without you doing anything.
|
|
|
|
|
The architecture was proposed this way due to the following reasons:
1. There are multiple channels through which the core backend systems (system of records) will get updated - Web, Mobile App etc. To have these updated records always reflecting in the cloud, we are bringing the data from backend to cloud if a record gets updated. Web always reads the data from the cloud.
2. Since we are decoupling the core backend systems from the system of access for web, we are building a data repository in cloud for the web to access which will not have direct communication to the core backend systems (asynchronous in nature).
3. Since the backend updates will be asynchronous (the delay can vary from near real-time to 1 hour depending upon various factors), it may not be fair to display to the customer a message like "your request will be processed later" when the customer updates his/her profile information or similar (no customer would love to see his name change take effect after xx minutes). For this reason, the update will be saved directly to the DB in the cloud and will be read from there by the web.
4. I was trying to work out a solution for a scenario where a customer updates his "First Name" twice within, say, 5 minutes. In this scenario, with the planned architecture, the second update will be displayed on the web as soon as it is made, while the first update is still in transit, making its way to the core database asynchronously. As I mentioned in point 1, the core backend systems will send the record to the web again if it is updated (the update can happen from multiple channels), and will update the cloud DB, which will overwrite the second update for this record in the cloud. After a few minutes, the second update will also complete the round trip and reach the cloud. But during the time between the second update being saved in the cloud and it completing the round trip, there is a possibility that the customer sees the first update displayed for a few minutes.
Hope this explains!
|
|
|
|
|
You could try keeping timestamps with the updated values. When syncing from the core DB to Azure, you could check the timestamp and only update the field if the incoming timestamp is newer.
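A rough sketch of that check, in Java for illustration (the class and field names are made up; in practice this logic would sit in whatever merge code writes core-DB changes into the cloud DB):

```java
import java.time.Instant;

// Last-writer-wins merge: an incoming value from the core DB only
// overwrites the cloud copy if it is newer than what the web app
// last wrote. Record and field names here are hypothetical.
class CustomerRecord {
    String firstName;
    Instant modifiedAt;

    CustomerRecord(String firstName, Instant modifiedAt) {
        this.firstName = firstName;
        this.modifiedAt = modifiedAt;
    }

    // Apply an update flowing back from the on-premises core DB.
    // Returns true if the cloud copy was actually changed.
    boolean applyCoreUpdate(String name, Instant coreTimestamp) {
        if (coreTimestamp.isAfter(this.modifiedAt)) {
            this.firstName = name;
            this.modifiedAt = coreTimestamp;
            return true;
        }
        return false; // stale echo of an earlier web update: drop it
    }
}
```

With a check like this, the first update echoing back from the core DB after the customer's second edit is detected as stale and ignored, so the web never flickers back to the older name. Note that, as far as I know, Azure SQL Data Sync resolves conflicts at the row level with hub-wins/member-wins policies, so a field-level timestamp check like this would likely need custom sync code rather than Data Sync itself.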
|
|
|
|
|
Rajeshjoseph wrote: I was trying to work out a solution for a scenario where a customer updates his "First Name" twice in the interval of say 5 minutes - in this scenario - with the planned architecture - the second update will be displayed on the web as soon as it is done,
And how does that change what I said?
What is the exact business scenario where a user is ever going to update their name twice within 5 minutes?
And given that there is in fact exactly that scenario how are you going to use technology to determine that the 'second' one is right and the first one is wrong?
Let me show how contrived this is with the following business scenario
- You have a single user who is gender challenged.
- That person has two cell phones and they are on the train to work
- The two cell phones use two different service providers and one which is slower due to connectivity issues due to the provider.
- That person is using each cell phone to change their name from 'Dan' to 'Sara' on their way to work
Is the above an actual business scenario? Is this something that the company actually wants to spend real money on to support this extreme corner case? How is your software going to actually determine which button that user pushed last just before they got off the train?
|
|
|
|
|
Google for "Windows Kiosk Mode", and/or try to give more details on what you are trying to do. As is, your question is too vague.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
Eddy Vluggen wrote: your question is too vague And already posted three or four times elsewhere.
|
|
|
|
|
I didn't view the message history, but that's a good reminder of why I should have.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
|
This isn't even a question. It's ironic that someone too lazy to even type out a question would post the same non-question multiple times.
There are only 10 types of people in the world, those who understand binary and those who don't.
|
|
|
|
|
Maybe it was an understated Eureka moment:
it window application form work in KIOSK (!!!)
(Yippee)
|
|
|
|
|
hi,
I'm reading about the flyweight design pattern. In my understanding it's basically something like 'use shared pointers/references to big objects in order to save memory'.
Example: I'm writing a website that contains 1 MB pictures all over it. All the pictures are the same. Now, instead of transferring the picture x times, I transfer it only once and let the website content always refer to that already-transferred picture.
Am I right about that?
The often-used example with letters and glyphs really confuses me here, so I'd like to double-check.
I can't understand why it would be better to have a reference to a letter object instead of a simple char inside a class...
I really appreciate any help here!!
regards
modified 8-Feb-16 15:04pm.
|
|
|
|
|
You will need to be more specific about what you mean by "transferring".
Are you talking about content in a web page? FTP? What?
What / why 1 MB images? For display in a browser? Why not thumbnails?
|
|
|
|
|
Thank you for the answer!
Yes, I was assuming a web page. But apparently the example was not appropriate and therefore misleading.
Maybe we should just skip the example.
So the "flyweight" in the "flyweight" pattern refers to small objects that all share some heavy resource?
|
|
|
|
|
Yes; I would agree with your last statement.
"Small objects" that container references / pointers to "big" / data objects.
|
|
|
|
|
thank you
|
|
|
|