thanks for the answers.
regarding data modeling, i understand it is an iterative process.
sometimes, i feel a solution was over-engineered, resulting in constant changes to the database design. that's why i commented that the data model ended up as a byproduct of the implementation.
then again, i might be wrong; maybe the solution did start with data modeling before the implementation changes were made.
back to topic:
the LDAP-like approach, can i say it is close to the EAV approach?
There is always this problem where computer science students tend to start coding before thinking things through. Before students ever get involved in real industrial projects, where mistakes cost dearly, they tend to feel like getting down to work as soon as possible. When you are in the industry and in charge of cost and schedule, you start to learn that planning makes all the difference. Even when there are implementations already in place, you still plan before you write the first line of code.
The same goes for any form of implementation, which in this case is database design. The database layer is very often the place that costs you the most in performance, and if you start it wrong, you just keep paying down the technical debt. You can't keep changing it once the project has gone live, because changing the underlying layer very often impacts a lot of the application code and the overall solution. After go-live it's even harder, because changes now have to take into account existing data, the migration effort, and the risk of corrupting data along the way. That is how someone like me, who has designed and implemented numerous systems, would advise any newcomer to software development.
That being said, the world is not perfect, so when your requirements are vague or incomplete and you still need to deliver something, you have to start with whatever you have in hand. How well you start then depends on how experienced you are. The more experienced you are, the more you will design things to be adaptable, yet you will also weigh that against over-design, which can incur excessive cost and schedule. How much is sufficient depends on how good you are; there is really no hard and fast rule, and the possibilities are virtually infinite.
Hence it is really hard to say what is right or wrong, or what truly comes first. It depends on the engineer.
Yes, LDAP data, when stored in a relational database, would best be modelled using EAV. That is possible because the LDAP schema is inherently extremely flexible. Also, due to the inheritance model, each object in the tree can be extended by more than one objectClass, which makes it possible for an object to carry a large number of attributes, yet not every attribute will be assigned a value. An object with hundreds of possible attributes might only hold values for tens of them, even in really complex directories.
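To make the sparse-attribute point concrete, here is a minimal sketch of the EAV shape for directory-style entries, using Python's built-in sqlite3. All table and column names are invented for illustration: only attributes that actually have values become rows, so hundreds of possible attributes cost nothing when unused, and multi-valued attributes fall out naturally as extra rows.

```python
import sqlite3

# Hypothetical EAV schema for LDAP-like entries: one row per entry,
# one row per assigned attribute value. Names are made up for this sketch.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE entry (
        id  INTEGER PRIMARY KEY,
        dn  TEXT UNIQUE NOT NULL              -- distinguished name
    );
    CREATE TABLE entry_attribute (
        entry_id INTEGER NOT NULL REFERENCES entry(id),
        name     TEXT NOT NULL,               -- attribute type, e.g. 'mail'
        value    TEXT NOT NULL                -- one row per value
    );
""")

conn.execute(
    "INSERT INTO entry (id, dn) VALUES (1, 'uid=alice,ou=people,dc=example,dc=com')"
)
conn.executemany(
    "INSERT INTO entry_attribute (entry_id, name, value) VALUES (?, ?, ?)",
    [(1, "cn", "Alice Smith"),
     (1, "mail", "a.smith@example.com"),     # multi-valued attribute:
     (1, "mail", "alice@example.com")],      # two rows, same name
)

# Fetch only the attributes that were actually assigned.
rows = conn.execute(
    "SELECT name, value FROM entry_attribute WHERE entry_id = 1 "
    "ORDER BY name, value"
).fetchall()
print(rows)
```

An entry with no optional attributes simply has no rows in `entry_attribute`; nothing in the schema has to change when a new objectClass adds more attribute types.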
In your case, the question is which approach makes sense for storing user-defined fields. You can actually use both approaches; you don't have to settle for just one. Use the normal approach of providing schema columns for attributes you know are mandatory. For custom user-defined fields, use the EAV approach. You can also employ XML/JSON for other fields that are complex in their own right and need to be validated or handled in the application tier.
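The mixed approach described above can be sketched in a few lines, again with sqlite3 and invented names: real columns for the mandatory fields, a side EAV table for ad-hoc user-defined fields, and a JSON-encoded column for a complex structure that the application tier owns and validates.

```python
import json
import sqlite3

# Hypothetical hybrid schema combining all three techniques.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE app_user (
        id       INTEGER PRIMARY KEY,
        username TEXT NOT NULL,       -- mandatory field: real column
        email    TEXT NOT NULL,       -- mandatory field: real column
        prefs    TEXT                 -- complex field: JSON, handled in app tier
    );
    CREATE TABLE user_custom_field (  -- EAV table for user-defined fields
        user_id INTEGER NOT NULL REFERENCES app_user(id),
        name    TEXT NOT NULL,
        value   TEXT,
        PRIMARY KEY (user_id, name)
    );
""")

prefs = {"theme": "dark", "notifications": {"email": True, "sms": False}}
conn.execute(
    "INSERT INTO app_user VALUES (1, 'bob', 'bob@example.com', ?)",
    (json.dumps(prefs),),
)
conn.execute(
    "INSERT INTO user_custom_field VALUES (1, 'employee_badge', 'B-1234')"
)

# Read back: fixed columns directly, JSON decoded by the application,
# custom fields from the EAV table.
username, prefs_raw = conn.execute(
    "SELECT username, prefs FROM app_user WHERE id = 1"
).fetchone()
badge = conn.execute(
    "SELECT value FROM user_custom_field "
    "WHERE user_id = 1 AND name = 'employee_badge'"
).fetchone()[0]
print(username, json.loads(prefs_raw)["theme"], badge)
```

The trade-off is roughly: real columns give you indexing and database-level constraints, EAV gives you runtime flexibility, and the JSON column keeps a complex nested structure in one place at the cost of the database not understanding it.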
One piece of software that I know uses all three approaches is Liferay CMS. Depending on the use case, it employs a different schema approach for storing information.