Service-oriented environments comprise many interconnected service providers and consumers and are populated by vast numbers of services with varying functionalities and qualities. Finding desirable services among them is a major problem for a typical user, who can optimize performance by selecting high-quality services. This problem is often addressed by relying on votes and advice about service quality collected from other agents in the environment. However, there is no guarantee that every agent gives fair advice about every service. The presence of unfair or malicious agents, who tend to misinform others about service quality, makes it necessary to develop methods that distinguish fair agents from unfair ones and provide users with reliable trust information. This distinction should be based on agents' previous behavior, which represents their reputation and trustworthiness. A learning automaton is an abstract model that adapts to feedback from its environment so as to converge on an optimal action. This has motivated the design of specific learning automata for separating groups of users according to their previous behavior, as extracted from their votes on the services available in the environment. A user can exploit this classification to judge the trustworthiness of other users. Here we propose an improved learning automata-based trust model for partitioning fair and unfair agents in service-oriented, multi-agent environments, achieving effective partitioning performance and reliable service selection efficiency.
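To make the learning-automaton mechanism concrete, the following is a minimal sketch of a generic two-action linear reward-inaction (L_R-I) automaton, one of the classical update schemes on which such trust models are typically built. This is an illustrative example only, not the model proposed here; the class name, the step-size parameter, and the toy environment (which rewards one action more often than the other) are all assumptions for demonstration.

```python
import random

class LearningAutomaton:
    """Variable-structure learning automaton with the linear
    reward-inaction (L_R-I) update: on reward, probability mass
    moves toward the chosen action; on penalty, nothing changes."""

    def __init__(self, n_actions, reward_step=0.1):
        # Start with a uniform action-probability vector.
        self.p = [1.0 / n_actions] * n_actions
        self.a = reward_step  # learning rate (assumed value)

    def choose(self):
        # Sample an action index according to the current probabilities.
        r, acc = random.random(), 0.0
        for i, pi in enumerate(self.p):
            acc += pi
            if r < acc:
                return i
        return len(self.p) - 1

    def update(self, action, rewarded):
        # L_R-I rule: update only when the environment rewards the action.
        if rewarded:
            for i in range(len(self.p)):
                if i == action:
                    self.p[i] += self.a * (1.0 - self.p[i])
                else:
                    self.p[i] *= (1.0 - self.a)

# Toy environment: action 1 is rewarded with probability 0.9,
# action 0 with probability 0.2 (hypothetical figures).
la = LearningAutomaton(2)
for _ in range(2000):
    act = la.choose()
    rewarded = random.random() < (0.9 if act == 1 else 0.2)
    la.update(act, rewarded)
# With high probability the automaton ends up preferring action 1.
```

In a trust setting, the "environment" feedback would come from comparing an adviser's votes against the user's own experience with a service, so the automaton gradually concentrates probability on trusting agents whose advice proves fair.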